Adaptive Graphical Model Network for 2D Handpose Estimation

Deying Kong (University of California, Irvine), Yifei Chen (Tencent), Haoyu Ma (Southeast University), Xiangyi Yan (Southern University of Science and Technology), Xiaohui Xie (University of California, Irvine)

Abstract
In this paper, we propose a new architecture called Adaptive Graphical Model Network (AGMN) to tackle the challenging task of 2D hand pose estimation from a monocular RGB image. The AGMN consists of two branches of deep convolutional neural networks (DCNNs) for computing unary and pairwise potential functions, followed by a graphical model inference module that integrates the two sets of potentials. Unlike existing architectures that combine DCNNs with graphical models, our AGMN is novel in that the parameters of its graphical model are conditioned on and fully adaptive to individual input images. Experiments show that our approach outperforms the state-of-the-art method for 2D hand keypoint estimation by a notable margin on two public datasets.
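To illustrate the inference step the abstract describes, the sketch below shows one plausible form of message passing in which per-image pairwise kernels (the "adaptive" part) refine the unary heatmaps. This is a simplified, hypothetical illustration, not the paper's actual module: it assumes a chain-structured model over keypoints, and the function names `conv2d_same` and `agmn_inference` are our own.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D cross-correlation for small arrays."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def agmn_inference(unary, pairwise, n_iters=2):
    """Sketch of message passing over a chain of keypoints.

    unary:    (K, H, W) heatmaps from the unary DCNN branch.
    pairwise: (K-1, kh, kw) kernels predicted per input image by the
              pairwise branch -- the adaptive part of the model.
    """
    K = unary.shape[0]
    beliefs = unary.copy()
    for _ in range(n_iters):
        new = unary.copy()
        for k in range(K):
            if k > 0:        # message from the previous keypoint
                new[k] *= conv2d_same(beliefs[k - 1], pairwise[k - 1])
            if k < K - 1:    # message from the next keypoint
                new[k] *= conv2d_same(beliefs[k + 1], pairwise[k])
        # renormalise each heatmap so beliefs stay well-scaled
        beliefs = new / new.reshape(K, -1).sum(axis=1)[:, None, None]
    return beliefs
```

Because the pairwise kernels are produced by a network branch that sees the input image, the same inference code adapts its spatial priors to each hand rather than using one fixed set of kernels for all images.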

DOI
10.5244/C.33.174
https://dx.doi.org/10.5244/C.33.174

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
title={Adaptive Graphical Model Network for 2D Handpose Estimation},
author={Deying Kong and Yifei Chen and Haoyu Ma and Xiangyi Yan and Xiaohui Xie},
year={2019},
month={September},
pages={174.1--174.13},
articleno={174},
numpages={13},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kirill Sidorov and Yulia Hicks},
doi={10.5244/C.33.174},
url={https://dx.doi.org/10.5244/C.33.174}
}