An Empirical Study on Leveraging Scene Graphs for Visual Question Answering

Cheng Zhang (Ohio State University), Wei-Lun Chao (Cornell University), Dong Xuan (Ohio State University)

Abstract
Visual question answering (Visual QA) has attracted significant attention in recent years. While a variety of algorithms have been proposed, most of them are built upon different combinations of image and language features as well as multi-modal attention and fusion. In this paper, we investigate an alternative approach inspired by conventional QA systems that operate on knowledge graphs. Specifically, we investigate the use of scene graphs derived from images for Visual QA: an image is abstractly represented by a graph with nodes corresponding to object entities and edges to object relationships. We adapt the recently proposed graph network (GN) to encode the scene graph and perform structured reasoning according to the input question. Our empirical studies demonstrate that scene graphs can already capture essential information of images, and that graph networks have the potential to outperform state-of-the-art Visual QA algorithms, but with a much cleaner architecture. By analyzing the features generated by GNs, we can further interpret the reasoning process, suggesting a promising direction towards explainable Visual QA.
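To make the scene-graph representation concrete, here is a minimal, hypothetical sketch: a toy image encoded as a graph of object-entity nodes and relationship edges, with one round of graph-network-style message passing. The node features, the relationship, and the sum-then-residual update rule are all illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

# Toy scene graph: nodes are object entities with small feature vectors,
# edges are (subject, predicate, object) relationships.
nodes = {"man": [1.0, 0.0], "horse": [0.0, 1.0]}
edges = [("man", "riding", "horse")]

def message_passing(nodes, edges):
    """One update step: each node aggregates (sums) its neighbors' features."""
    messages = defaultdict(lambda: [0.0, 0.0])
    for subj, _, obj in edges:
        # Messages flow in both directions along each relationship edge.
        for i in range(2):
            messages[obj][i] += nodes[subj][i]
            messages[subj][i] += nodes[obj][i]
    # Residual update: original node feature plus aggregated messages.
    return {n: [f + m for f, m in zip(feat, messages[n])]
            for n, feat in nodes.items()}

updated = message_passing(nodes, edges)
```

After one step, each node's feature incorporates information from the entity it is related to; a question-conditioned GN would additionally modulate these updates with the question embedding.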

DOI
10.5244/C.33.151
https://dx.doi.org/10.5244/C.33.151

Files
Paper (PDF)
Supplementary material (PDF)

BibTeX
@inproceedings{BMVC2019,
title={An Empirical Study on Leveraging Scene Graphs for Visual Question Answering},
author={Cheng Zhang and Wei-Lun Chao and Dong Xuan},
year={2019},
month={September},
pages={151.1--151.14},
articleno={151},
numpages={14},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kirill Sidorov and Yulia Hicks},
doi={10.5244/C.33.151},
url={https://dx.doi.org/10.5244/C.33.151}
}