Answer-driven Deep Question Generation based on Reinforcement Learning

Liuyin Wang, Zihan Xu, Zibo Lin, Haitao Zheng, Ying Shen


Abstract
Deep question generation (DQG) aims to generate complex questions through reasoning over multiple documents. The task is challenging and underexplored. Existing methods mainly focus on enhancing document representations, with little attention paid to the answer information, which may result in the generated question not matching the answer type and being answer-irrelevant. In this paper, we propose an Answer-driven Deep Question Generation (ADDQG) model based on the encoder-decoder framework. The model makes better use of the target answer as guidance to facilitate question generation. First, we propose an answer-aware initialization module with a gated connection layer which introduces both document and answer information to the decoder, thus helping to guide the choice of answer-focused question words. Then a semantic-rich fusion attention mechanism is designed to support the decoding process, which integrates the answer with the document representations to promote the proper handling of answer information during generation. Moreover, reinforcement learning is applied to integrate both syntactic and semantic metrics as the reward to enhance the training of the ADDQG. Extensive experiments on the HotpotQA dataset show that ADDQG outperforms state-of-the-art models in both automatic and human evaluations.
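The gated connection layer mentioned in the abstract can be sketched as follows. The paper's exact equations are not reproduced on this page, so the element-wise sigmoid gate below (g = σ(w_doc·h_doc + w_ans·h_ans + b), with the decoder's initial state as a gated mix of the document and answer vectors) is an illustrative assumption, not the authors' formulation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_init(h_doc, h_ans, w_doc, w_ans, b):
    """Illustrative gated connection (assumed form, not the paper's equations).

    Computes an element-wise gate g from the document vector h_doc and the
    answer vector h_ans, then mixes the two to form the decoder's initial
    state: g * h_doc + (1 - g) * h_ans.
    """
    # Element-wise gate: g_i = sigmoid(w_doc_i * h_doc_i + w_ans_i * h_ans_i + b)
    g = [sigmoid(wd * d + wa * a + b)
         for wd, wa, d, a in zip(w_doc, w_ans, h_doc, h_ans)]
    # Gated combination of document and answer information
    return [gi * d + (1.0 - gi) * a for gi, d, a in zip(g, h_doc, h_ans)]
```

With zero weights and bias the gate is 0.5 everywhere, so the initial state is the midpoint of the two vectors; a large positive bias pushes the state toward the document vector. The idea is that the gate lets the decoder start from a blend of answer and document information rather than the document alone.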
Anthology ID:
2020.coling-main.452
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5159–5170
URL:
https://aclanthology.org/2020.coling-main.452
DOI:
10.18653/v1/2020.coling-main.452
Cite (ACL):
Liuyin Wang, Zihan Xu, Zibo Lin, Haitao Zheng, and Ying Shen. 2020. Answer-driven Deep Question Generation based on Reinforcement Learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5159–5170, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Answer-driven Deep Question Generation based on Reinforcement Learning (Wang et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.452.pdf
Data
HotpotQA