TARN: Temporal Attentive Relation Network for Few-Shot and Zero-Shot Action Recognition

Mina Bishay (Queen Mary University of London), Georgios Zoumpourlis (Queen Mary University of London), Ioannis Patras (Queen Mary University of London)

Abstract
In this paper we propose a novel Temporal Attentive Relation Network (TARN) for the problems of few-shot and zero-shot action recognition. At the heart of our network is a meta-learning approach that learns to compare representations of variable temporal length, that is, either two videos of different lengths (in the case of few-shot action recognition) or a video and a semantic representation such as a word vector (in the case of zero-shot action recognition). In contrast to other works in few-shot and zero-shot action recognition, we a) utilise attention mechanisms to perform temporal alignment, and b) learn a deep distance measure on the aligned representations at the video segment level. We adopt an episode-based training scheme and train our network in an end-to-end manner. The proposed method does not require any fine-tuning in the target domain, nor the maintenance of additional representations as is the case with memory networks. Experimental results show that the proposed architecture outperforms the state of the art in few-shot action recognition and achieves competitive results in zero-shot action recognition.
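The two ingredients highlighted in the abstract — attention-based temporal alignment between sequences of different lengths, followed by a learned comparison at segment level — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the function names, the dot-product attention, and the single linear comparator (`W`) are all hypothetical stand-ins for TARN's actual alignment module and deep relation network.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_align(query, support):
    """Align support-video segments to query-video segments via attention.

    query:   (n, d) segment features of the query video
    support: (m, d) segment features of a support video (m may differ from n)
    Returns an (n, d) support representation re-expressed at query length.
    """
    scores = query @ support.T           # (n, m) segment-to-segment similarity
    weights = softmax(scores, axis=1)    # attend over support segments
    return weights @ support             # weighted sum -> aligned support

def segment_relation(query, aligned, W):
    """Toy 'deep' distance (hypothetical): a linear map on the per-segment
    absolute difference, averaged over segments, yielding a scalar score."""
    diff = np.abs(query - aligned)       # (n, d) segment-wise comparison
    return float((diff @ W).mean())      # scalar relation score
```

In the zero-shot setting the same machinery applies with `support` replaced by a semantic representation (e.g. a word vector), so only the video side varies in length.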

DOI
10.5244/C.33.130
https://dx.doi.org/10.5244/C.33.130

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
title={TARN: Temporal Attentive Relation Network for Few-Shot and Zero-Shot Action Recognition},
author={Mina Bishay and Georgios Zoumpourlis and Ioannis Patras},
year={2019},
month={September},
pages={130.1--130.14},
articleno={130},
numpages={14},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kirill Sidorov and Yulia Hicks},
doi={10.5244/C.33.130},
url={https://dx.doi.org/10.5244/C.33.130}
}