FEEL (Force-Enhanced Egocentric Learning): A Dataset for Physical Action Understanding

Eadom Dessalene, Botao He, Michael Maynord, Yonatan Tussa, Pavan Mantripragada, Yianni Karabatis, Nirupam Roy, Yiannis Aloimonos
University of Maryland, College Park
arXiv 2026
[FEEL teaser image]
FEEL pairs egocentric video with force measurements to capture the physical causes—not just the visual effects—of hand-object interaction.

Videos


The person turns on the sink with their right hand and releases after a second of activity. Relevant force profile shown in BLUE.
The person gently brushes the countertop surface. Relevant force profile shown in BLUE.
The person presses the onions with a press. Relevant force profile shown in BLUE.
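
For readers curious how the blue force profiles above might be rendered, the sketch below shows one way to plot a synchronized force trace against video time. The 30 fps rate, single aggregated force channel, units, and toy signal are our illustrative assumptions, not the dataset's actual schema.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical synchronized recording: 30 fps video with one force
# sample per frame (rate, duration, and signal are illustrative only).
fps = 30
t = np.arange(0, 10, 1 / fps)                                 # timestamps (s)
force = np.clip(np.sin(2 * np.pi * 0.3 * t), 0, None) * 5.0   # toy force trace (N)

fig, ax = plt.subplots(figsize=(8, 2))
ax.plot(t, force, color="blue", label="aggregate glove force")
ax.set_xlabel("time (s)")
ax.set_ylabel("force (N)")
ax.legend(loc="upper right")
fig.tight_layout()
plt.show()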

Abstract


We introduce FEEL (Force-Enhanced Egocentric Learning), the first large-scale dataset pairing egocentric video with force measurements gathered from custom piezoresistive gloves. Our gloves enable scalable data collection, and FEEL contains approximately 3 million force-synchronized frames of natural, unscripted manipulation in kitchen environments, with 45% of frames involving hand-object contact. Because force is the underlying cause that drives physical interaction, it is a critical primitive for physical action understanding. We demonstrate the utility of force for physical action understanding by applying FEEL to two families of tasks: (1) contact understanding, where we jointly perform temporal contact segmentation and pixel-level segmentation of contacted objects; and (2) action representation learning, where force prediction serves as a self-supervised pretraining objective for video backbones. We achieve state-of-the-art temporal contact segmentation results and competitive pixel-level segmentation results without any manual contacted-object segmentation annotations. Furthermore, we demonstrate that action representation learning with FEEL improves transfer performance on action understanding tasks across EPIC-Kitchens, Something-Something V2, Ego-Exo4D, and MECCANO, without requiring any manual labels.
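
As a concrete illustration of task family (2), the sketch below outlines force prediction as a self-supervised pretraining objective in PyTorch, assuming a generic video backbone that emits per-frame features. The tensor shapes, sensor channel count, regression head, and L2 loss are our assumptions for illustration, not the paper's exact training recipe.

import torch
import torch.nn as nn

class ForcePretrainer(nn.Module):
    """Sketch of self-supervised pretraining: regress synchronized
    glove forces from video features (shapes/channels are assumed)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, n_sensors: int = 16):
        super().__init__()
        self.backbone = backbone                     # any video model -> (B, T, feat_dim)
        self.head = nn.Linear(feat_dim, n_sensors)   # hypothetical regression head

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(clips)                 # (B, T, feat_dim) per-frame features
        return self.head(feats)                      # (B, T, n_sensors) predicted forces

def pretrain_step(model, clips, forces, optimizer):
    """One step: video frames in, synchronized force readings as targets."""
    optimizer.zero_grad()
    pred = model(clips)                              # (B, T, n_sensors)
    loss = nn.functional.mse_loss(pred, forces)      # assumed L2 regression loss
    loss.backward()
    optimizer.step()
    return loss.item()

After pretraining, the regression head would be discarded and the backbone fine-tuned on downstream action understanding tasks; the force targets, which come for free from the gloves, are the only supervision at this stage.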

Paper


[FEEL paper thumbnail]
FEEL (Force-Enhanced Egocentric Learning): A Dataset for Physical Action Understanding
Eadom Dessalene, Botao He, Michael Maynord, Yonatan Tussa, Pavan Mantripragada, Yianni Karabatis, Nirupam Roy, Yiannis Aloimonos

Citation


@article{dessalene2026feel,
  title={FEEL: Force-Enhanced Egocentric Learning for Physical Action Understanding},
  author={Dessalene, Eadom and He, Botao and Maynord, Michael and Tussa, Yonatan and Mantripragada, Pavan and Karabatis, Yianni and Roy, Nirupam and Aloimonos, Yiannis},
  journal={arXiv},
  year={2026}
}