Embodied Vision-and-Language Navigation with Dynamic Convolutional Filters

Federico Landi (University of Modena and Reggio Emilia), Lorenzo Baraldi (University of Modena and Reggio Emilia), Massimiliano Corsini (University of Modena and Reggio Emilia), Rita Cucchiara (University of Modena and Reggio Emilia)

In Vision-and-Language Navigation (VLN), an embodied agent must reach a target destination guided only by a natural language instruction. To explore the environment and progress towards the target location, the agent performs a series of low-level actions, such as rotating, before stepping ahead. In this paper, we propose to exploit dynamic convolutional filters to encode the visual information and the linguistic description in an efficient way. Unlike previous works that abstract away from the agent's perspective and operate in high-level navigation spaces, we design a policy that decodes the information provided by dynamic convolution into a series of low-level, agent-friendly actions. Results show that our model with dynamic filters outperforms architectures based on traditional convolution, setting the new state of the art for embodied VLN in the low-level action space. Additionally, we categorize recent work on VLN by architectural choices and distinguish two main groups, which we call low-level action and high-level action models. To the best of our knowledge, we are the first to propose this analysis and categorization for VLN.
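The core idea of dynamic convolutional filters can be sketched as follows: the filter weights are predicted from the instruction encoding at run time, rather than learned as static parameters, and are then convolved with the visual features to produce a language-conditioned response map. The sketch below uses 1x1 filters and pure Python for clarity; all names, shapes, and the affine filter generator are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of dynamic convolutional filtering (pure Python,
# no deep-learning framework). Shapes and names are hypothetical.

def generate_filter(instr_embedding, weight, bias):
    """Predict a 1x1 dynamic filter (one weight per visual channel)
    from the instruction embedding via an affine map."""
    return [
        sum(w * x for w, x in zip(row, instr_embedding)) + b
        for row, b in zip(weight, bias)
    ]

def dynamic_response(features, dyn_filter):
    """Apply the dynamic 1x1 filter to a spatial grid of visual
    features: each cell's response is a dot product, so the output
    is a language-conditioned activation map over the image."""
    return [
        [sum(f * c for f, c in zip(dyn_filter, cell)) for cell in row]
        for row in features
    ]

# Toy example: 2-dim instruction embedding, 3-channel 2x2 feature map.
instr = [1.0, -1.0]
W = [[0.5, 0.0], [0.0, 0.5], [0.5, 0.5]]  # embedding -> 3 filter weights
b = [0.0, 0.0, 0.0]
filt = generate_filter(instr, W, b)        # -> [0.5, -0.5, 0.0]

feats = [[[1.0, 1.0, 1.0], [2.0, 0.0, 0.0]],
         [[0.0, 2.0, 0.0], [1.0, 0.0, 3.0]]]
resp = dynamic_response(feats, filt)       # -> [[0.0, 1.0], [-1.0, 0.5]]
```

A different instruction embedding would yield a different filter, and hence a different response map over the same image, which is what lets a single network specialize its visual processing to the current command.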


Paper (PDF)
Supplementary material (ZIP)

@inproceedings{landi2019embodied,
  title={Embodied Vision-and-Language Navigation with Dynamic Convolutional Filters},
  author={Federico Landi and Lorenzo Baraldi and Massimiliano Corsini and Rita Cucchiara},
  booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
  publisher={BMVA Press},
  editor={Kirill Sidorov and Yulia Hicks},
  year={2019}
}