Saliency-guided Enhancement for Volume Visualization
Youngmin Kim and Amitabh Varshney


Overview

We present a visual-saliency-based operator to enhance selected regions of a volume. We show how this operator, applied to a user-specified saliency field, yields an emphasis field. We further discuss how the emphasis field can be integrated into the visualization pipeline through its modulation of regional luminance and chrominance. Finally, we validate our work with an eye-tracking-based user study and show that our saliency enhancement operator is more effective at eliciting viewer attention than the traditional Gaussian enhancement operator.
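To make the operator concrete, below is a minimal sketch of one plausible implementation. The paper's exact formulation lives in the full text; this version assumes a difference-of-Gaussians center-surround response summed over a few scales, and the function name, scale choices, and surround ratio are illustrative rather than the authors' actual parameters.

    # Hedged sketch: turn a user-specified saliency field into a signed
    # emphasis field via center-surround (difference-of-Gaussians) responses
    # summed over several scales. All parameters are illustrative.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def emphasis_field(saliency, scales=(1.0, 2.0, 4.0), surround_ratio=2.0):
        """Map a 3D saliency field to a signed emphasis field in [-1, 1]."""
        saliency = saliency.astype(np.float32)
        emphasis = np.zeros_like(saliency)
        for sigma in scales:
            center = gaussian_filter(saliency, sigma)                     # narrow blur
            surround = gaussian_filter(saliency, sigma * surround_ratio)  # wide blur
            emphasis += center - surround  # positive in the region, negative around it
        peak = np.abs(emphasis).max()
        return emphasis / peak if peak > 0 else emphasis

The signed output is what separates this operator from a plain Gaussian: the marked region receives positive emphasis while its immediate neighborhood receives negative emphasis, which the luminance and chrominance modulation described below exploits.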



Motivation

Figure: The Visible Male dataset with no change (left) and with saliency-guided enhancement (right).

Comprehensible depiction of large volume datasets has been a long-standing challenge. Transfer functions have been used to help visualize the features and details in volumes by assigning varying optical properties, such as color and opacity, to different densities of a volumetric scalar field. Significant advances have been made in the art and the science of devising transfer functions that successfully show the inherent structures within a given volume dataset. Despite these impressive advances, transfer functions remain a mapping from local geometric attributes, such as the local density of the scalar field and its first- and higher-order derivatives, to physical appearance.
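As a deliberately simplified illustration of such a mapping, a basic one-dimensional transfer function is just a lookup table from scalar density to color and opacity; the density ranges and colors below are invented for illustration, not taken from the paper.

    # Schematic 1D transfer function: a lookup table from 8-bit density to
    # RGBA. The breakpoints and colors are invented for illustration.
    import numpy as np

    tf = np.zeros((256, 4), dtype=np.float32)
    tf[60:100] = (0.9, 0.7, 0.6, 0.05)   # low density: skin-like, nearly transparent
    tf[180:256] = (1.0, 1.0, 0.95, 0.8)  # high density: bone-like, mostly opaque

    def classify(volume):
        """Assign each voxel a color and an opacity based on its density."""
        return tf[np.clip(volume, 0, 255).astype(np.intp)]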

As volume datasets have grown in complexity, so too has the need to emphasize and draw visual attention to appropriate regions in their visualization. This paper addresses the growing need for tools and techniques that can draw visual attention to user-specified regions in a direct volume rendering environment. Towards this goal, we seek solutions rooted in multi-scale methods for visual saliency that can guide visual attention according to varying perceptual importance.


Contributions

Figure: The Foot dataset with no change (left) and with saliency-guided enhancement (right).

In this research, we introduce a new visualization enhancement operator that is inspired by the center-surround mechanism of visual saliency. Our goal is to enhance human perception of the volume data by guiding a viewer's attention to specific regions of interest. Since our method considers the influence of each voxel at multiple scales, it can emphasize volumetric features at an appropriate visual scale. Existing transfer functions, based on local geometry and its derivatives, would find it difficult to achieve a similar level of multi-scale emphasis. The main contributions of this paper are:

  • We present a new saliency-based enhancement operator to guide visual attention in volume visualization.
  • We discuss augmenting the existing visualization pipeline by incorporating enhancement operators to increase the visual saliency of different regions of a volume dataset.
  • We present an eye-tracking-based user study that shows that our saliency-enhancement operator is successful in eliciting viewer attention in volume visualization.


Results

After computing the emphasis field guided by the saliency field, we use it to modulate various visualization parameters. We explored saliency-guided alteration of brightness and color saturation for volumes. While the Gaussian operator only increases the brightness/saturation of the user-specified regions, our saliency-enhancement operator additionally lowers the brightness/saturation in the surrounding neighborhood. This difference draws much greater user attention to the desired regions, even with subtle changes to the overall parameters.
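A minimal sketch of this modulation step, assuming the signed emphasis field sketched earlier and a per-voxel luminance channel (the strength parameter is an illustrative knob, not a value from the paper):

    # Hedged sketch: brighten where emphasis > 0 and dim where emphasis < 0.
    # A Gaussian operator's field is non-negative, so it can only brighten.
    import numpy as np

    def modulate_luminance(luminance, emphasis, strength=0.3):
        return np.clip(luminance * (1.0 + strength * emphasis), 0.0, 1.0)

Saturation can be modulated the same way by scaling chroma instead of luminance.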

We carried out an eye-tracking-based user study to gather objective evidence of the effectiveness of our approach. First, we analyzed the effects of each enhancement technique on two different regions for each model (Table 1). We did not observe significant differences in the percentage of fixations when a region was enhanced by the Gaussian-based method in any of the cases. However, we observed significant differences in all cases when a region was enhanced by the saliency-guided method. We next carried out a pairwise t-test on the percentage of fixations before and after applying the enhancement techniques for each model (Table 2). We found a significant difference in the percentage of fixations when we applied saliency-guided enhancement for all the models, while we noticed only small differences when we applied Gaussian-based enhancement.
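For reference, each per-model comparison reduces to a paired t-test over subjects on the percentage of fixations inside the area of interest. A sketch with made-up numbers (not the study's data):

    # Paired t-test on per-subject fixation percentages, before vs. after
    # enhancement. The values below are invented for illustration.
    import numpy as np
    from scipy.stats import ttest_rel

    before = np.array([12.0, 15.5, 9.8, 14.1, 11.3])  # % fixations, no change
    after = np.array([24.7, 28.0, 19.5, 26.2, 22.9])  # % fixations, enhanced

    t_stat, p_value = ttest_rel(after, before)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")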

Figure: Fixation results.

Table 1: Pairwise t-tests on the 1st and the 2nd areas of interest.

  Model          Condition                        t-Value   p-Value
  -------------  -------------------------------  -------   -------
  Foot           No change                          0.312     0.762
                 Region 1 enhanced by Gaussian      1.35      0.248
                 Region 1 enhanced by Saliency      2.74      0.052
                 Region 2 enhanced by Gaussian     -0.68      0.534
                 Region 2 enhanced by Saliency      2.96      0.042
  Visible Male   No change                          0.959     0.363
                 Region 1 enhanced by Gaussian      1.34      0.250
                 Region 1 enhanced by Saliency      4.39      0.012
                 Region 2 enhanced by Gaussian     -0.57      0.601
                 Region 2 enhanced by Saliency     -5.82      0.004

Table 2: List of pairwise t-tests (no change vs. each enhancement).

  Model          Condition: No change vs.      t-Value   p-Value
  -------------  ----------------------------  -------   -------
  Engine Block   Gaussian-based enhancement     -2.36      0.042
                 Saliency-guided enhancement     2.86      0.019
  Foot           Gaussian-based enhancement      2.67      0.026
                 Saliency-guided enhancement     3.34      0.009
  Visible Male   Gaussian-based enhancement     -0.661     0.525
                 Saliency-guided enhancement    -6.65     < 0.001
  Sheep Heart    Gaussian-based enhancement     -3.86      0.005
                 Saliency-guided enhancement     4.49      0.002


Conclusions

We have proposed a saliency-based enhancement of volume visualization and successfully validated its ability to elicit viewer attention. Our model is inspired by the center-surround mechanisms of the human visual system. Saliency-guided enhancement for volume visualization can be helpful in several contexts. For instance, our approach could be used to help users navigate through complex volumetric datasets and to facilitate their understanding by guiding their attention to regions and objects selected by a domain expert.


Publications & Supplemental Materials


Acknowledgements

This work has been supported in part by NSF grants IIS 04-14699, CCF 04-29753, CNS 04-03313, and CCF 05-41120. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.




© Copyright 2013, Graphics and Visual Informatics Laboratory, University of Maryland, All rights reserved.
