Learning Generalizable Dexterous Manipulation from Human Grasp Affordance

UC San Diego


Dexterous manipulation with a multi-finger hand is one of the most challenging problems in robotics. While recent progress in imitation learning has greatly improved sample efficiency over reinforcement learning, the learned policies can hardly generalize to manipulating novel objects given only limited expert demonstrations. In this paper, we propose to learn dexterous manipulation from large-scale demonstrations covering diverse 3D objects within a category, generated from a human grasp affordance model. This generalizes the policy to novel object instances of the same category. To train the policy, we propose a novel imitation learning objective jointly with a geometric representation learning objective on our demonstrations. Experiments on relocating diverse objects in simulation show that our approach outperforms baselines by a large margin when manipulating novel objects. We also ablate the importance of 3D object representation learning for manipulation.


We leverage a state-of-the-art affordance model, GraspCVAE, to generate diverse grasps on diverse objects within the same category. Given the generated grasps, we use motion planning to obtain trajectories that reach them. While these trajectories do not show how to perform a particular task, they serve as partial demonstrations that guide our policy toward the correct contacts for grasping.
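As a rough illustration of this demonstration-generation pipeline, the sketch below strings together a stand-in grasp sampler and a straight-line joint-space interpolation in place of a real motion planner. All function and parameter names here are our own assumptions for illustration, not the paper's code; GraspCVAE itself is a learned model conditioned on object geometry.

```python
import numpy as np

def sample_grasp_pose(rng, n_joints=30):
    # Stand-in for GraspCVAE: the real pipeline predicts a hand grasp
    # pose conditioned on the object's 3D shape; here we just sample one.
    return rng.uniform(-1.0, 1.0, size=n_joints)

def plan_trajectory(start, goal, n_steps=50):
    # Stand-in for a motion planner: linearly interpolate in joint space
    # from the current hand configuration to the target grasp pose.
    alphas = np.linspace(0.0, 1.0, n_steps)[:, None]
    return (1.0 - alphas) * start + alphas * goal

def generate_demonstration(rng, n_joints=30, n_steps=50):
    # One "partial demonstration": a reaching trajectory that ends at the
    # generated grasp but does not show the downstream task itself.
    start = np.zeros(n_joints)
    goal = sample_grasp_pose(rng, n_joints)
    return plan_trajectory(start, goal, n_steps)

rng = np.random.default_rng(0)
demo = generate_demonstration(rng)
print(demo.shape)  # (50, 30): n_steps x n_joints
```

Repeating this over many object instances and grasp samples yields the large-scale, category-level demonstration set used for training.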

We propose a novel imitation learning objective to train the policy from the affordance demonstrations. Since we use a PointNet encoder to extract 3D object shape information, we additionally propose a 3D geometric representation learning objective that is optimized jointly with imitation learning.
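To make the joint objective concrete, here is a minimal NumPy sketch of one possible form: a behavior-cloning term on demonstrated actions plus a weighted geometric term computed from a PointNet-style global feature. This is a simplified illustration under our own assumptions; the paper's actual imitation objective and representation loss may differ, and all names below are hypothetical.

```python
import numpy as np

def pointnet_embed(points, w):
    # Simplified PointNet: per-point linear map followed by max pooling,
    # giving a permutation-invariant global shape feature.
    return np.max(points @ w, axis=0)

def joint_loss(points, demo_action, policy_action, w_enc, w_dec, lam=0.1):
    # Imitation term: behavior cloning on demonstrated actions (MSE).
    il_loss = np.mean((policy_action - demo_action) ** 2)
    # Geometric term: reconstruct a coarse shape summary (here, the point
    # centroid) from the global feature, as a toy representation objective.
    feat = pointnet_embed(points, w_enc)
    recon = feat @ w_dec
    geo_loss = np.mean((recon - points.mean(axis=0)) ** 2)
    # Joint objective: imitation plus weighted representation learning.
    return il_loss + lam * geo_loss

rng = np.random.default_rng(1)
points = rng.normal(size=(128, 3))       # object point cloud
w_enc = rng.normal(size=(3, 16))         # encoder weights
w_dec = rng.normal(size=(16, 3)) * 0.01  # decoder weights
demo_action = rng.normal(size=30)
loss = joint_loss(points, demo_action, demo_action, w_enc, w_dec)
print(loss >= 0.0)  # prints True
```

The weight `lam` trades off how strongly the shape representation is shaped by the geometric objective versus the imitation signal.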

Visualizations of Demonstrations

We show visualizations of our demonstrations. The top-left corner shows the target human grasp.

Comparison to DAPG on Unseen Objects

We compare our method to DAPG on relocating different unseen objects. The green object is the target.


@inproceedings{wu2022learning,
                    title={Learning Generalizable Dexterous Manipulation from Human Grasp Affordance},
                    author={Wu, Yueh-Hua and Wang, Jiashun and Wang, Xiaolong},
                    booktitle={European Conference on Computer Vision (ECCV)},
                    year={2022}
}