Video action recognition, as a critical problem in video understanding, has been gaining increasing attention. To recognize actions induced by complex object-object interactions, we must consider not only spatial relations among objects within a single frame but also temporal relations among the same or different objects across multiple frames. However, existing approaches that model video representations and non-local features either cannot explicitly model relations at the object-object level or cannot handle streaming videos. In this paper, we propose a novel dynamic hidden graph module to model complex object-object interactions in videos, of which we consider two instantiations: a visual graph that captures appearance/motion changes among objects, and a location graph that captures relative spatiotemporal position changes among objects. Moreover, the proposed graph module can process streaming videos, setting it apart from existing methods. Experimental results on two benchmark datasets, Something-Something and ActivityNet, show the competitive performance of our method.
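To make the two graph instantiations concrete, the sketch below illustrates one plausible reading of the abstract, not the paper's actual architecture: a visual graph whose edges come from feature similarity between object proposals, and a location graph whose edges come from the spatial proximity of their box centers; each graph then aggregates object features over its weighted edges. All function names and the specific affinity choices (softmax over dot products, inverse center distance) are illustrative assumptions.

```python
import numpy as np

def visual_graph_aggregate(feats):
    """Hypothetical visual graph: edges from appearance similarity.

    feats: (N, D) array of object features (e.g., RoI-pooled, possibly
    from different frames). Returns features aggregated over a
    softmax-normalized dot-product affinity graph.
    """
    sim = feats @ feats.T                      # (N, N) pairwise affinities
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    adj = np.exp(sim)
    adj /= adj.sum(axis=1, keepdims=True)      # row-stochastic adjacency
    return adj @ feats                         # neighbor-weighted features

def location_graph_aggregate(boxes, feats):
    """Hypothetical location graph: edges from relative positions.

    boxes: (N, 4) array of [cx, cy, w, h] object boxes; feats: (N, D).
    Edge weights decay with center distance (one simple choice).
    """
    centers = boxes[:, :2]
    dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    adj = 1.0 / (1.0 + dist)                   # closer objects -> stronger edges
    adj /= adj.sum(axis=1, keepdims=True)
    return adj @ feats

# Toy usage: 3 objects with 4-D features and box coordinates.
rng = np.random.default_rng(0)
f = rng.standard_normal((3, 4))
b = np.array([[0.2, 0.2, 0.1, 0.1],
              [0.25, 0.2, 0.1, 0.1],
              [0.9, 0.9, 0.1, 0.1]])
v_out = visual_graph_aggregate(f)
l_out = location_graph_aggregate(b, f)
```

In a full model the two aggregated features would typically be transformed and fused (e.g., concatenated or summed) before classification; how the paper combines them is not specified in the abstract.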