Question answering as an automatic evaluation metric for news article summarization
Recent work in automatic summarization and headline generation focuses on maximizing ROUGE scores for various news datasets. We present an alternative, extrinsic evaluation metric for this task: Answering Performance for Evaluation of Summaries (APES). APES leverages recent progress in reading comprehension to quantify the ability of a summary to answer a set of manually created questions regarding central entities in the source article. We first analyze the strength of this metric by comparing it to known manual evaluation metrics. We then present an end-to-end neural abstractive model that maximizes APES while also achieving competitive ROUGE scores.
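As a rough sketch of the idea (not the authors' implementation), an APES-style score can be read as the fraction of entity-centered questions a reading-comprehension model answers correctly when given only the summary as context. The `answer_fn` hook, the question/answer pairs, and the exact-match comparison below are assumptions made purely for illustration.

```python
from typing import Callable, List, Tuple

def apes_style_score(
    summary: str,
    questions: List[Tuple[str, str]],         # (question, gold entity answer) pairs
    answer_fn: Callable[[str, str], str],     # QA model: (context, question) -> predicted answer
) -> float:
    """Fraction of entity questions answered correctly from the summary alone.

    `questions` would be derived from salient entities in the source article,
    and `answer_fn` would wrap a reading-comprehension model; both are left
    abstract here.
    """
    if not questions:
        return 0.0
    correct = sum(
        1
        for question, gold in questions
        if answer_fn(summary, question).strip().lower() == gold.strip().lower()
    )
    return correct / len(questions)
```

A summary that preserves the salient entities of the article should allow the QA model to answer more of these questions, yielding a higher score than a summary that omits them.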