EVJVQA CHALLENGE: MULTILINGUAL VISUAL QUESTION ANSWERING

Ngan Luu-Thuy Nguyen, Nghia Hieu Nguyen, Duong T.D. Vo, Khanh Quoc Tran, Kiet Van Nguyen
University of Information Technology, Vietnam National University


DOI:

https://doi.org/10.15625/1813-9663/18157

Keywords:

Visual Question Answering, Vision-language understanding, Multimodal learning, Information fusion, Transformer model

Abstract

Visual Question Answering (VQA) is a challenging task at the intersection of natural language processing (NLP) and computer vision (CV) that has attracted significant attention from researchers. English is a resource-rich language with a wide range of datasets and models for visual question answering, whereas VQA in other languages still lacks comparable resources and models. In addition, no existing multilingual dataset targets the visual content of a particular country, with its own objects and cultural characteristics. To address this gap, we provide the research community with a benchmark dataset named EVJVQA, comprising 33,000+ question-answer pairs in three languages (Vietnamese, English, and Japanese) over approximately 5,000 images taken in Vietnam, for evaluating multilingual VQA systems. EVJVQA served as the benchmark dataset for the multilingual visual question answering challenge at the 9th Workshop on Vietnamese Language and Speech Processing (VLSP 2022), which attracted 62 participating teams from various universities and organizations. In this article, we present the organization of the challenge, an overview of the methods employed by the shared-task participants, and the results. The highest performances on the private test set are 0.4392 in F1-score and 0.4009 in BLEU. The multilingual VQA systems proposed by the top two teams use ViT as the pre-trained vision model and mT5, a powerful multilingual pre-trained language model based on the Transformer architecture, as the pre-trained language model. EVJVQA is a challenging dataset that motivates NLP and CV researchers to further explore multilingual models and systems for visual question answering.
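The top-ranked systems couple a Transformer-based image encoder (ViT) with a multilingual text generator (mT5). The sketch below is a minimal, hypothetical illustration of that general pattern, not the participants' exact systems; it assumes the Hugging Face checkpoints google/vit-base-patch16-224 and google/mt5-base, projects ViT patch features into mT5's embedding space, prepends them to the question token embeddings, and lets the mT5 decoder generate the answer.

```python
# A minimal, hypothetical sketch of a ViT + mT5 VQA model (not the shared-task
# participants' exact systems), assuming the Hugging Face checkpoints
# "google/vit-base-patch16-224" and "google/mt5-base".
import torch
import torch.nn as nn
from transformers import ViTModel, MT5ForConditionalGeneration

class ViTmT5VQA(nn.Module):
    def __init__(self,
                 vit_name: str = "google/vit-base-patch16-224",
                 mt5_name: str = "google/mt5-base"):
        super().__init__()
        self.vit = ViTModel.from_pretrained(vit_name)                     # image encoder
        self.mt5 = MT5ForConditionalGeneration.from_pretrained(mt5_name)  # multilingual generator
        # Project ViT patch features into mT5's embedding space so they can
        # be prepended to the question token embeddings.
        self.proj = nn.Linear(self.vit.config.hidden_size, self.mt5.config.d_model)

    def forward(self, pixel_values, input_ids, attention_mask, labels=None):
        img_feats = self.proj(self.vit(pixel_values=pixel_values).last_hidden_state)
        txt_embeds = self.mt5.encoder.embed_tokens(input_ids)
        inputs_embeds = torch.cat([img_feats, txt_embeds], dim=1)
        img_mask = torch.ones(img_feats.shape[:2],
                              dtype=attention_mask.dtype, device=attention_mask.device)
        full_mask = torch.cat([img_mask, attention_mask], dim=1)
        # Training: labels are the tokenized answer. For inference one would instead
        # call self.mt5.generate(inputs_embeds=inputs_embeds, attention_mask=full_mask).
        return self.mt5(inputs_embeds=inputs_embeds, attention_mask=full_mask, labels=labels)
```

Answers in EVJVQA are free-form text, so systems are scored by comparing generated strings against the gold answers with F1-score and BLEU. Below is a hedged sketch of such scoring, assuming a SQuAD-style token-overlap F1 and NLTK's sentence-level BLEU; the official EVJVQA evaluation script may tokenize and aggregate differently.

```python
# A hedged sketch of answer scoring (SQuAD-style token F1 and sentence BLEU);
# the official EVJVQA scoring script may differ in tokenization and averaging.
from collections import Counter
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def token_f1(prediction: str, reference: str) -> float:
    pred, ref = prediction.lower().split(), reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def answer_bleu(prediction: str, reference: str) -> float:
    smooth = SmoothingFunction().method1  # avoid zero scores on very short answers
    return sentence_bleu([reference.lower().split()], prediction.lower().split(),
                         smoothing_function=smooth)

print(token_f1("hai chiếc xe máy", "hai chiếc xe đạp"))  # 3 of 4 tokens match -> 0.75
print(answer_bleu("a red motorbike parked outside", "a red motorbike parked on the street"))
```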




Published

26-09-2023

How to Cite

[1] N. Luu-Thuy Nguyen, N. H. Nguyen, D. T. D. Vo, K. Q. Tran, and K. V. Nguyen, “EVJVQA CHALLENGE: MULTILINGUAL VISUAL QUESTION ANSWERING”, JCC, vol. 39, no. 3, pp. 237–258, Sep. 2023.

Section

Articles