Pose from Shape: Deep Pose Estimation for Arbitrary 3D Objects

Yang Xiao (École des Ponts ParisTech), Xuchong Qiu (École des Ponts ParisTech), Pierre-Alain Langlois (École des Ponts ParisTech), Mathieu Aubry (École des Ponts ParisTech), Renaud Marlet (École des Ponts ParisTech)

Most deep pose estimation methods need to be trained for specific object instances or categories. In this work we propose a completely generic deep pose estimation approach, which does not require the network to have been trained on relevant categories, nor objects in a category to have a canonical pose. We believe this is a crucial step to design robotic systems that can interact with new objects “in the wild”, not belonging to a predefined category. Our main insight is to dynamically condition pose estimation with a representation of the 3D shape of the target object. More precisely, we train a Convolutional Neural Network that takes as input both a test image and a 3D model, and outputs the relative 3D pose of the object in the input image with respect to the 3D model. We demonstrate that our method boosts performance for supervised category pose estimation on standard benchmarks, namely Pascal3D+, ObjectNet3D and Pix3D, on which we provide results superior to the state of the art. More importantly, we show that our network trained on everyday man-made objects from ShapeNet generalizes without any additional training to completely new types of 3D objects, by providing results on the LINEMOD dataset as well as on natural entities such as animals from ImageNet.
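The shape-conditioning idea described above can be sketched as follows. This is a minimal illustrative sketch only, not the authors' implementation: the feature vectors stand in for the outputs of an image CNN and a 3D shape encoder (e.g. a multi-view or point-cloud network), and all names, layer sizes, and the angle parameterization are assumptions. The key point is that the pose head sees both embeddings, so the prediction is conditioned on the target 3D model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes, rng):
    """Build a list of (W, b) layers with small random weights (hypothetical head)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, layers):
    """Apply the (W, b) layers with ReLU between them, no activation on the last."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)  # ReLU
    return x

# Placeholder embeddings standing in for the two real encoders:
# an image CNN applied to the test image, and a 3D shape encoder
# applied to the target 3D model (dimensions are assumptions).
img_feat = rng.standard_normal(128)   # image embedding
shape_feat = rng.standard_normal(64)  # 3D shape embedding

# Fuse both embeddings, then regress a viewpoint (azimuth,
# elevation, in-plane rotation), squashed into [-pi, pi].
head = make_mlp([128 + 64, 96, 3], rng)
fused = np.concatenate([img_feat, shape_feat])
azimuth, elevation, inplane = np.tanh(mlp_forward(fused, head)) * np.pi

print(f"azimuth={azimuth:.3f}, elevation={elevation:.3f}, inplane={inplane:.3f}")
```

In practice the paper's network is trained end to end, and changing the input 3D model changes the predicted pose for the same image, which is what allows it to generalize to object types never seen during training.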


Paper (PDF)
Supplementary material (PDF)

@inproceedings{xiao2019posefromshape,
  title={Pose from Shape: Deep Pose Estimation for Arbitrary 3D Objects},
  author={Yang Xiao and Xuchong Qiu and Pierre-Alain Langlois and Mathieu Aubry and Renaud Marlet},
  booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
  publisher={BMVA Press},
  editor={Kirill Sidorov and Yulia Hicks},
  year={2019}
}