Relational Affordance Learning for Robot Manipulation

Talk
David Held
Time: 03.27.2023, 11:00 to 12:00

Robots today are typically confined to interacting with rigid, opaque objects that have known object models. However, the objects in our daily lives are often non-rigid, can be transparent or reflective, and are diverse in shape and appearance. I argue that, to enhance the capabilities of robots, we should develop perception methods that estimate exactly what robots need to know to interact with the world. Specifically, I will present novel perception methods that estimate “relational affordances”: task-specific geometric relationships between objects that allow a robot to determine what actions it must take to complete a task. These estimated relational affordances enable robots to perform complex tasks such as manipulating cloth and articulated objects and grasping transparent and reflective objects, generalizing to unseen objects within a category and to unseen object configurations. By reasoning about relational affordances, we can achieve robust performance on difficult robot manipulation tasks.