For questions or
comments, contact Ben Bederson (firstname.lastname@example.org)
Perlin, NYU Media Research Laboratory
"Making Believable Responsive Animated Characters" (Abstract)
September 5, 2:00pm, 3258 A.V. Williams Building
Carroll, Virginia Tech
"MOOsburg: Supplementing a real community with a virtual community" (Abstract)
September 26, 2:00pm, 3258 A.V. Williams Building
Igarashi, Brown University
"Freeform User Interfaces for Graphical Computing" (Abstract)
October 10, 2:00pm, 3258 A.V. Williams Building
Card, Xerox PARC
"Foraging the Web" (Abstract)
October 23, 4:00pm, Classroom Building, CLB 0111 - CS Colloquium
"Information Scent and Visual Attention in a Focus+Context Tree Visualization" (Abstract)
October 24, 2:00pm, 3258 A.V. Williams Building - HCIL Seminar
Dey, Georgia Tech
"The Context Toolkit" (Abstract)
November 14, 2:00pm, 3258 A.V. Williams Building
Danaë Stanton, University of Nottingham
"Virtual Environments to aid Spatial Cognition in Disabled Children and the Elderly" (Abstract)
November 28, 2:00pm, 3258 A.V. Williams Building
Mankoff, Georgia Tech
"Interface techniques for handling recognition errors and ambiguity in recognition-based input" (Abstract)
December 12, 2:00pm, 3258 A.V. Williams Building
Perlin: "Making Believable Responsive Animated Characters"
Our laboratory is engaged in a number of research areas that aim to increase the sense of interaction and involvement for users of computer-mediated information. In one branch of this research, we have been developing approaches and tools to create believable and engaging responsive animated characters. One major challenge in such research is to get users to psychologically "buy into" the character. To do this effectively requires an interesting combination of technology and aesthetics.
This talk will cover our work in this area,
as well as the challenges and opportunities for designers of automated
multimedia delivery and navigation tools, and some of the open research
problems left to be explored. Time permitting, the talk will also touch
on various other research projects ongoing in our laboratory.
Carroll: "MOOsburg: Supplementing a real community with a virtual community"
MOOsburg is a community-oriented multi-user domain. It was created to enrich the Blacksburg Electronic Village by providing real-time, situated interaction, and a place-based model for community information. It is an interesting testbed for investigating a wide variety of issues ranging from participatory design and universal access to programming to new software paradigms for multi-user domain functionality and map-based navigation in virtual environments.
John M. Carroll is Professor of Computer Science, Education, and Psychology, and Director of the Center for Human-Computer Interaction all at Virginia Tech. His research interests include methods and theory in human-computer interaction, particularly as applied to networking tools for collaborative learning activities.
Igarashi: "Freeform User Interfaces for Graphical Computing"
It is difficult to communicate graphical ideas or images to computers using current WIMP-style GUIs. Users have to decompose the desired graphics in their minds into simple elements such as points, lines, or boxes, and manipulate those elements using click-and-drag operations. On the other hand, people have long used simple drawings based on freeform strokes to express arbitrary visual messages quickly. Freeform User Interfaces is an interface design framework that leverages the power of freeform strokes to achieve fluent interaction between users and computers in performing graphical tasks. In this talk, I will introduce the basic idea and three examples of freeform user interfaces. Pegasus is a drawing system that beautifies the user's freeform strokes and predicts the next drawings automatically. Teddy is a 3D modeling system in which the user can construct rounded 3D models quickly just by drawing 2D outlines. Flatland is an electronic office whiteboard that features flexible screen space control, various sketch-based applications, and automatic history management. You can find descriptions and demos at http://www.mtl.t.u-tokyo.ac.jp/~takeo/research/Projects.html
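The "beautification" Pegasus performs can be illustrated with a toy version: if a freeform stroke deviates only slightly from the straight segment between its endpoints, snap it to that segment. This is a hedged sketch; the function name, tolerance, and collinearity test are mine, not Pegasus's actual algorithm, which infers much richer geometric constraints.

```python
import math

def beautify_stroke(points, tol=2.0):
    """Toy 'beautification': if a freeform stroke is nearly straight,
    replace it with the clean segment between its endpoints.
    (Illustrative only -- not Pegasus's actual constraint solver.)"""
    (x0, y0), (x1, y1) = points[0], points[-1]
    length = math.hypot(x1 - x0, y1 - y0)
    if length == 0:
        return points
    # Maximum perpendicular distance of any point from the endpoint line.
    dev = max(abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
              for x, y in points)
    return [(x0, y0), (x1, y1)] if dev <= tol else points
```

A wobbly but nearly straight stroke collapses to its two endpoints, while a deliberately curved one is left untouched; real systems extend the same idea to perpendicularity, symmetry, and congruence.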
Card: "Foraging the Web"
The amount of information readily available to a person has increased rapidly and is now staggering. It is important to understand and characterize how people find and exploit such information, for example to inform the design of new information appliances. The greatest information expansion, of course, comes from the Web. Fortunately and very interestingly, the Web also allows for the ready instrumentation of user information interaction in a way not previously possible. This important development enables the study of information-intensive user behavior at lower cost and grander scale than previously practical. I will show results of studies characterizing user behavior while trying to find things on the Web. It turns out that some methods of classical Human-Computer Interaction fail, forcing us to deal with information content as well as form and presaging a subtle shift from Human-Computer Interaction to what might be called Human-Information Interaction.
Card: "Information Scent and Visual Attention in a Focus+Context Tree Visualization"
Focus+context information visualizations have sought to amplify human cognition by decreasing the cost structure of information to the user. They have sought to do this by using the display space non-uniformly, making more detail available and devoting more spatial resources to information predicted to be of interest to the user. But the details of this process, and whether it works, have never been studied. I will describe how the focus+context distortion of the Hyperbolic Tree browser affects information foraging behavior in a task similar to the CHI '97 Browse Off. There appear to be two countervailing processes affecting visual attention in these displays: strong information scent expands the spotlight of attention, whereas crowding of targets in the compressed region of the Hyperbolic Tree narrows it. We can understand these effects somewhat by combining results from theories of information foraging and visual attention. Together, these help us to understand the mechanics of information visualization.
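One classic way to make "more spatial resources for more interesting information" concrete is Furnas's fisheye degree-of-interest function, DOI(x) = API(x) − distance(x, focus), which focus+context displays such as the Hyperbolic Tree realize geometrically rather than by thresholding. The sketch below is illustrative only: the tree encoding, the threshold, and the choice API(x) = −depth(x) are my assumptions, not the browser's algorithm.

```python
def path_to_root(tree, node):
    """Path from a node up to the root, inclusive. The tree is a dict
    mapping each node to its parent (None for the root)."""
    path = [node]
    while tree[node] is not None:
        node = tree[node]
        path.append(node)
    return path

def fisheye_view(tree, focus, threshold=-3):
    """Show only nodes whose fisheye degree of interest clears a threshold:
    DOI(x) = API(x) - distance(x, focus), with API(x) = -depth(x),
    so nodes near the root matter a priori."""
    focus_path = path_to_root(tree, focus)
    shown = []
    for node in tree:
        node_path = path_to_root(tree, node)
        depth = len(node_path) - 1
        # Tree distance: steps from each endpoint up to their deepest
        # common ancestor (node names are unique, so set overlap works).
        common = len(set(node_path) & set(focus_path))
        distance = (len(node_path) - common) + (len(focus_path) - common)
        if -depth - distance >= threshold:
            shown.append(node)
    return shown
```

With the focus on one subtree, the view keeps the root, the focus, its children, and the focus's siblings, while distant leaves fall below threshold and disappear.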
Stuart Card is a Xerox Research Fellow and the manager of the User Interface Research group at the Xerox Palo Alto Research Center. With Allen Newell and Tom Moran from CMU, he founded an effort to develop models of human performance usable in information system design. His thesis at CMU was the first specifically in the new specialty of human-computer interaction. His study of input devices led to the Fitts's Law characterization of the mouse and was a main factor leading to the mouse's commercial introduction. He and his group have developed a number of theories of human-machine interaction, including the Model Human Processor, the GOMS theory of user interaction, and information foraging theory. They have developed new paradigms of human-machine interaction, including the Rooms workspace manager and the Information Visualizer. Work in his group has resulted in ten Xerox products and the founding of Inxight Software, Inc., ContentGuard, and GroupFire, Inc. Card is a co-author of the book The Psychology of Human-Computer Interaction, a co-editor of the book Human Performance Models for Computer-Aided Engineering, and has served on many editorial boards. He received his A.B. in Physics from Oberlin College and his Ph.D. in Psychology from Carnegie Mellon University, where he pursued an interdisciplinary program in psychology, artificial intelligence, and computer science. His most recent book, Readings in Information Visualization, co-written and edited with Jock Mackinlay and Ben Shneiderman, was published in January.
Dey: "The Context Toolkit"
Context is an important, yet poorly understood
and poorly utilized source of information in interactive computing. It
will be of particular importance in the new millennium as users move away
from their desktops and into settings where their contexts are changing
rapidly. Context is difficult to use because, unlike other forms of user
input, there is no common, reusable way to handle it. As a result, context-aware
applications have been built in an ad hoc manner, making it difficult to
build new applications or evolve existing ones. To make it easier to build
these applications, we have created the Context Toolkit, a toolkit that
provides some important abstractions and support for the field of context-aware
computing. In this talk, I will discuss how context can be used to enhance
existing applications and how to support application builders in building
context-aware applications. I will also describe a number of applications
built with the Context Toolkit. More information on the toolkit is available online.
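The abstract's central claim is that a reusable abstraction can separate how context is sensed from how applications use it. A minimal sketch of that separation follows; the actual Context Toolkit is a Java library, and these class and method names are illustrative, not its API.

```python
class ContextWidget:
    """Minimal sketch of a context widget: it hides how context is
    sensed and delivers updates to subscribers, so applications never
    touch the sensor directly. (Illustrative, not the toolkit's API.)"""

    def __init__(self, context_type):
        self.context_type = context_type
        self.last_value = None        # cached for late-joining apps
        self._subscribers = []

    def subscribe(self, callback):
        """Register an application callback for this context type."""
        self._subscribers.append(callback)

    def update(self, value):
        """Called by the sensing layer when new context arrives."""
        self.last_value = value
        for callback in self._subscribers:
            callback(self.context_type, value)

# An application reacts to location context without knowing whether it
# came from a badge sensor, GPS, or a simulation.
seen = []
location = ContextWidget("location")
location.subscribe(lambda ctx, val: seen.append((ctx, val)))
location.update("room 3258")
```

Because the application only sees `(context_type, value)` pairs, the sensing implementation can change without touching application code, which is the reuse the abstract argues ad hoc designs lack.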
Stanton: "Virtual Environments to aid Spatial Cognition in Disabled Children and the Elderly"
This presentation will outline experimental work on the use of virtual
environments in assessing and improving spatial skills in people with
physical disabilities. The first study presents a novel paradigm for
investigating configural learning in humans, based on a shortcut study
previously used with hamsters. The shortcut behaviour of physically disabled
children, with varying degrees of mobility, was examined. Results indicated
that children who had had limited mobility from birth were poorer at the
task than those whose mobility had deteriorated with age, supporting the
hypothesis that early independent exploration is important in the
development of cognitive spatial mapping ability. Secondly, I will discuss
transfer of spatial information, making reference to a study which involved
physically disabled children exploring a simulation of a school and then
completing tests of spatial ability within the equivalent real school. A
successful transfer of spatial skills was demonstrated and thus the
potential of this technology for training. I shall then outline a series of
studies that examined the effect of repeated exposure to virtual
environments and confirmed that the skills disabled children acquired using
virtual environments improved with exposure to successive environments.
Brief conclusions from these experiments will be drawn. Finally, I will
report on work in progress, including the design of virtual environments for
the elderly to support social inclusion rather than exclusion and the use of
a three storey simulation to investigate vertical spatial encoding.
Mankoff: "Interface techniques for handling recognition errors and ambiguity in recognition-based input"
Because of its promise of natural interaction, recognition is coming into its own as a mainstream technology for use with computers. Both commercial and research applications are beginning to use it extensively. However, the errors made by recognizers can be quite costly, and this is increasingly becoming a focus for researchers. We have conducted a survey of existing error correction techniques. These techniques most commonly fall into one of two strategies: repetition and choice. Based on the needs uncovered by this survey, we have developed OOPS, a toolkit that supports resolution of input ambiguity through mediation. I will describe the functionality supported by OOPS and illustrate it with four new interaction techniques. These interaction techniques each address problems not directly handled by standard error correction techniques, and all can be re-used in a variety of settings.
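The mediation idea the abstract describes can be sketched as keeping every recognition interpretation alive, with its confidence, until a mediator resolves the ambiguity. A minimal illustration, where the function name and candidate representation are my assumptions, not the OOPS API:

```python
def mediate(candidates, choose=None):
    """Toy mediation of ambiguous recognition input: 'candidates' is a
    list of (interpretation, confidence) pairs. By default the top-ranked
    interpretation wins silently; passing a 'choose' mediator (e.g. a
    choice menu shown to the user) lets it override the ranking.
    (Illustrative only -- not the OOPS toolkit's actual interface.)"""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    if choose is not None:
        return choose(ranked)       # e.g. a user-facing choice dialog
    return ranked[0][0]             # automatic resolution

# Automatic resolution picks the most confident interpretation; a
# 'choice' mediator can stand in for the user correcting the recognizer.
result = mediate([("hello", 0.6), ("hallo", 0.3), ("hullo", 0.1)])
```

Deferring the choice to a pluggable mediator is what lets the same ambiguous-input machinery back very different correction interfaces, which is the reuse argument the abstract makes.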