Graphics, Visualization, and VR/AR
The University of Maryland's Graphics and Visual Informatics Laboratory (GVIL) was established in 2000 by the Department of Computer Science and the University of Maryland Institute for Advanced Computer Studies to promote research and education in computer graphics, scientific visualization, and virtual environments. Here, we work to improve the efficiency and usability of visual computing applications in science, engineering, and medicine. The laboratory's research covers the design of algorithms and data structures that reconcile realism with interactivity for very large graphics datasets, the use of visual-saliency principles to build visual attention management tools, systems for rapid access to distributed graphics datasets across memory and network hierarchies, and the influence of heterogeneous display and rendering devices on the visual computing pipeline. The laboratory develops visual computing tools and technologies to support several research-driving applications: protein folding and rational drug design, navigation and interaction with mechanical CAD datasets, and ubiquitous access to distributed three-dimensional graphics datasets.
Our cutting-edge display facilities, such as the Augmentarium, enable effective visualization of large, complex datasets and of the higher-level products derived from them. Such visualization is essential for engaging the creativity of the human brain to find patterns and relationships that would otherwise remain unobserved.
Further information is available at http://www.cs.umd.edu/gvil/
Augmented and virtual reality (AR and VR) are poised to change our world in ways we could only have imagined a few years ago. At the University of Maryland we are working on several driving applications for next-generation virtual and augmented reality, including augmented navigation, medical training, virtual manufacturing, and immersive education. We are developing technologies in five interconnected thrust areas: scene capture and generation; tracking and registration; multimodal rendering; displays; and interfaces and usability.
For scene capture and generation, we are working on both mobile and stationary multi-camera arrays that enable us to capture the light fields of real-world immersive environments with resolution matching human visual acuity. Using one of these unique arrays, we have recorded live footage of actual surgeries at UMB’s Shock Trauma Center, and we are building towards high-fidelity telepresence using arrays of over 1000 cameras.
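To give a sense of the pixel budgets involved, a rough back-of-envelope sketch follows. The figure of about 60 pixels per degree (one arcminute) for human visual acuity is a common rule of thumb, and the per-camera 4K resolution is an illustrative assumption, not the lab's actual specification:

```python
import math

def pixels_for_fov(h_fov_deg, v_fov_deg, ppd=60.0):
    """Pixel budget to match ~1 arcminute acuity (about 60 pixels/degree)."""
    return int(h_fov_deg * ppd) * int(v_fov_deg * ppd)

# A full spherical (360 x 180 degree) capture at retinal resolution:
total = pixels_for_fov(360, 180)          # 233,280,000 pixels (~233 MP)

# Assuming illustrative 4K (3840 x 2160) cameras, with no overlap this
# already takes dozens of cameras; light-field sampling multiplies that,
# since many viewpoints of each scene region are needed.
cameras_no_overlap = math.ceil(total / (3840 * 2160))   # 29
```

The jump from tens of cameras for a single panorama to arrays of over 1000 reflects the extra viewpoints a light field requires beyond a single image.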
We are designing, developing, and validating multimodal rendering algorithms and low-latency embedded systems that are extremely efficient and consume very little power. These systems use information about the salient components of the scene, together with just-in-time tracking, to scale up to the very high display resolutions and frame rates needed to maintain the illusion of immersion that is vital to VR experiences.
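One common way tracking information reduces rendering cost is foveation: render at full resolution only near the tracked gaze point and let resolution fall off with eccentricity. The sketch below is a minimal illustration of that idea; the fovea size, falloff rate, and floor are illustrative parameters, not the lab's algorithm:

```python
def foveation_scale(ecc_deg, fovea_deg=5.0, falloff=0.05, min_scale=0.1):
    """Resolution scale factor for a screen region, given its angular
    distance (eccentricity, degrees) from the tracked gaze point.
    Full resolution inside the foveal region; linear falloff outside,
    clamped to a minimum scale for the far periphery."""
    if ecc_deg <= fovea_deg:
        return 1.0
    return max(min_scale, 1.0 - falloff * (ecc_deg - fovea_deg))

# Example: a tile 10 degrees from the gaze point renders at 75% resolution.
scale = foveation_scale(10.0)   # 0.75
```

Because the number of pixels scales with the square of linear resolution, even a modest peripheral scale factor cuts shading work dramatically, which is what makes very high resolutions feasible at high frame rates.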
To address interface and usability issues in VR and AR, we must understand the causes of the psychophysical problems that arise from extended exposure to immersive environments. We are currently developing real-time algorithms for multi-stream visualization and data mining of EEG data on modern parallel architectures to classify and quantify the onset of cybersickness.
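A typical building block for this kind of EEG stream mining is spectral band power per channel, fed to a classifier. The toy sketch below computes band power via an FFT and forms a delta-to-alpha power ratio as a stand-in indicator; the bands, the ratio, and the function names are illustrative assumptions, not the lab's actual classifier:

```python
import numpy as np

def bandpower(signal, fs, band):
    """Mean spectral power of one EEG channel within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def cybersickness_score(channel, fs=256):
    """Toy indicator: ratio of delta (1-4 Hz) to alpha (8-13 Hz) power.
    A small epsilon guards against division by a near-zero denominator."""
    return bandpower(channel, fs, (1, 4)) / (bandpower(channel, fs, (8, 13)) + 1e-12)
```

In a real-time setting this computation runs over a sliding window per channel, which parallelizes naturally across channels and windows, matching the multi-stream, parallel processing the paragraph describes.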
More information is available at http://augmentarium.umd.edu.