Paper Title

Generating Multiple-Length Summaries via Reinforcement Learning for Unsupervised Sentence Summarization

Paper Authors

Dongmin Hyun, Xiting Wang, Chanyoung Park, Xing Xie, Hwanjo Yu

Paper Abstract

Sentence summarization shortens a given text while maintaining its core content. Unsupervised approaches have been studied to summarize texts without human-written summaries. However, recent unsupervised models are extractive: they remove words from the input text and are thus less flexible than abstractive summarization. In this work, we devise an abstractive model based on reinforcement learning without ground-truth summaries. We formulate unsupervised summarization as a Markov decision process with rewards representing the summary quality. To further enhance the summary quality, we develop a multi-summary learning mechanism that generates multiple summaries of varying lengths for a given text, while making the summaries mutually enhance each other. Experimental results show that the proposed model substantially outperforms both abstractive and extractive models, while frequently generating new words not contained in the input texts.
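The reinforcement-learning formulation described in the abstract — sampling a summary and updating the generator from a reward that scores summary quality — can be illustrated with a toy REINFORCE loop. This is a minimal sketch, not the paper's model: the unconditional bag-of-words "policy" and the content-overlap reward are hypothetical stand-ins for the paper's sequence policy and quality rewards.

```python
import math
import random

random.seed(0)

vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
source = {"the", "cat", "sat", "on", "mat"}  # tokens of the (toy) input text

# One logit per vocabulary word: a toy unconditional policy over tokens.
logits = [0.0] * len(vocab)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def sample_summary(length=3):
    """Sample a fixed-length 'summary' as token indices from the policy."""
    probs = softmax(logits)
    return [random.choices(range(len(vocab)), weights=probs)[0]
            for _ in range(length)]

def reward(idxs):
    """Toy proxy for summary quality: fraction of tokens found in the input."""
    return sum(1 for i in idxs if vocab[i] in source) / len(idxs)

lr = 0.5
for step in range(300):
    idxs = sample_summary()
    r = reward(idxs)
    probs = softmax(logits)
    # REINFORCE: for each sampled token, grad of log-prob is (onehot - probs),
    # scaled by the reward of the whole sampled summary.
    for i in idxs:
        for j in range(len(vocab)):
            g = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * r * g
```

After training, the policy concentrates probability mass on tokens that earn reward, i.e. tokens overlapping the input; the paper's multi-summary mechanism would additionally sample summaries of several lengths and let their rewards reinforce one another, which this single-length sketch omits.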
