Paper Title

GSum: A General Framework for Guided Neural Abstractive Summarization

Paper Authors

Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig

Paper Abstract

Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control. While previous studies attempt to provide different types of guidance to control the output and increase faithfulness, it is not clear how these strategies compare and contrast to each other. In this paper, we propose a general and extensible guided summarization framework (GSum) that can effectively take different kinds of external guidance as input, and we perform experiments across several different varieties. Experiments demonstrate that this model is effective, achieving state-of-the-art performance according to ROUGE on 4 popular summarization datasets when using highlighted sentences as guidance. In addition, we show that our guided model can generate more faithful summaries and demonstrate how different types of guidance generate qualitatively different summaries, lending a degree of controllability to the learned models.
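The abstract describes GSum as taking external guidance, such as highlighted source sentences, as an additional input alongside the document. The sketch below is only an illustration of that idea under stated assumptions, not the authors' implementation: it greedily selects "highlighted sentences" that best cover a reference summary (a common way to build guidance at training time; at test time guidance would come from an extractive model or a user) and pairs them with the source. The names `select_guidance` and `GuidedExample` are hypothetical.

```python
# Minimal sketch (assumed, illustrative): building highlighted-sentence
# guidance and pairing it with a source document for a guided summarizer.
from dataclasses import dataclass
from typing import List


@dataclass
class GuidedExample:
    source: str     # full source document, fed to the source encoder
    guidance: str   # highlighted sentences, fed as the guidance signal
    target: str     # reference summary (available at training time only)


def _tokens(text: str) -> set:
    return set(text.lower().split())


def select_guidance(source_sents: List[str], reference: str, k: int = 2) -> List[str]:
    """Greedily pick up to k source sentences whose words best cover the reference."""
    remaining = _tokens(reference)
    chosen: List[str] = []
    candidates = list(source_sents)
    for _ in range(k):
        best, best_gain = None, 0
        for sent in candidates:
            gain = len(_tokens(sent) & remaining)
            if gain > best_gain:
                best, best_gain = sent, gain
        if best is None:
            break
        chosen.append(best)
        candidates.remove(best)
        remaining -= _tokens(best)
    return chosen


if __name__ == "__main__":
    doc = [
        "The city council approved the new transit budget on Monday.",
        "Local shops reported steady sales over the weekend.",
        "The budget allocates funds for two new subway lines.",
    ]
    ref = "Council approves transit budget funding two new subway lines."
    guidance = select_guidance(doc, ref, k=2)
    example = GuidedExample(source=" ".join(doc), guidance=" ".join(guidance), target=ref)
    print(example.guidance)
```

In the actual framework, the source and guidance are consumed by the neural model itself; this snippet only shows how a (source, guidance, target) example might be assembled before being passed to such a model.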
