Comparing Attention-Based Convolutional and Recurrent Neural Networks: Success and Limitations in Machine Reading Comprehension

Matthias Blohm, Glorianna Jagfeld, Ekta Sood, Xiang Yu, Ngoc Thang Vu


Abstract
We propose a machine reading comprehension model based on the compare-aggregate framework with two-staged attention that achieves state-of-the-art results on the MovieQA question answering dataset. To investigate the limitations of our model as well as the behavioral difference between convolutional and recurrent neural networks, we generate adversarial examples to confuse the model and compare its performance to that of humans. Furthermore, we assess the generalizability of our model by analyzing how its inference differs from human inference, drawing upon insights from cognitive science.
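The abstract names the compare-aggregate framework with two-staged (word-level and sentence-level) attention without spelling it out. The following numpy sketch is a minimal, hypothetical illustration of that general idea, not the authors' implementation (see the linked repository for that); the function names, the dot-product attention, the element-wise comparison, and the mean-pooling aggregation are simplifying assumptions, whereas the paper's setting uses learned CNN/RNN components.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, context):
    # Stage 1 (word level): soft-align each context word to the query
    # via dot-product attention, yielding a query summary per word.
    scores = context @ query.T                # (n_ctx, n_query)
    weights = softmax(scores, axis=-1)
    return weights @ query                    # (n_ctx, d)

def compare_aggregate(question, sentence):
    # Compare: element-wise product of each word embedding with its
    # attended question summary. Aggregate: mean pooling here; the
    # compare-aggregate literature uses a CNN or RNN instead.
    h = attend(question, sentence)
    compared = sentence * h
    return compared.mean(axis=0)              # (d,)

def two_stage_score(question, plot_sentences, answer):
    # Stage 2 (sentence level): weight the sentence representations by
    # their relevance to the question, then score the candidate answer
    # against the aggregated plot representation.
    q_vec = question.mean(axis=0)
    sent_reps = np.stack([compare_aggregate(question, s) for s in plot_sentences])
    weights = softmax(sent_reps @ q_vec)      # sentence-level attention
    plot_rep = weights @ sent_reps
    return float(plot_rep @ compare_aggregate(question, answer))

Given pre-computed word embeddings, one would score each of the five MovieQA answer candidates and pick the argmax, e.g. with random placeholder embeddings:

rng = np.random.default_rng(0)
d = 8
question = rng.normal(size=(5, d))                   # 5 question words
plot = [rng.normal(size=(7, d)) for _ in range(3)]   # 3 plot sentences
answers = [rng.normal(size=(4, d)) for _ in range(5)]
scores = [two_stage_score(question, plot, a) for a in answers]
print(int(np.argmax(scores)))                        # predicted answer index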
Anthology ID:
K18-1011
Volume:
Proceedings of the 22nd Conference on Computational Natural Language Learning
Month:
October
Year:
2018
Address:
Brussels, Belgium
Editors:
Anna Korhonen, Ivan Titov
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
108–118
URL:
https://aclanthology.org/K18-1011
DOI:
10.18653/v1/K18-1011
Cite (ACL):
Matthias Blohm, Glorianna Jagfeld, Ekta Sood, Xiang Yu, and Ngoc Thang Vu. 2018. Comparing Attention-Based Convolutional and Recurrent Neural Networks: Success and Limitations in Machine Reading Comprehension. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 108–118, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Comparing Attention-Based Convolutional and Recurrent Neural Networks: Success and Limitations in Machine Reading Comprehension (Blohm et al., CoNLL 2018)
PDF:
https://aclanthology.org/K18-1011.pdf
Code
DigitalPhonetics/reading-comprehension
Data
MovieQA
SQuAD