Paper Title
Context-Tuning: Learning Contextualized Prompts for Natural Language Generation
Paper Authors
Paper Abstract
Recently, pretrained language models (PLMs) have achieved exceptional success in language generation. To leverage the rich knowledge encoded by PLMs, a simple yet powerful paradigm is to use prompts in the form of either discrete tokens or continuous embeddings. In existing studies, these prompting methods are typically independent of the inputs, lacking sufficient consideration of input semantics. To address this issue, we propose a novel continuous prompting approach, called context-tuning, for fine-tuning PLMs for natural language generation. First, the prompts are derived from the input text to elicit useful knowledge from PLMs for generation. We refer to such prompts as contextualized prompts. Second, we use continuous inverse prompting to improve the process of natural language generation by modeling an inverse generation process from output to input, making the generated text more relevant to the inputs. Furthermore, we utilize a lightweight context-tuning method that fine-tunes only 0.12% of the parameters while maintaining good performance. Our code is publicly available at https://github.com/RUCAIBox/Context-Tuning.
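The core idea of contextualized prompts (as opposed to input-independent prompts) can be illustrated with a minimal numpy sketch. All dimensions, the mean-pooling step, and the projection module below are hypothetical simplifications for illustration; the paper's actual method derives prompts with a full PLM, not a single linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the real method uses a full-size PLM).
vocab_size, embed_dim, prompt_len, input_len = 100, 16, 4, 6

# Frozen embedding table standing in for the pretrained model's embeddings.
plm_embeddings = rng.normal(size=(vocab_size, embed_dim))

# Small trainable projection mapping pooled input semantics to prompt vectors.
# Keeping only a module like this trainable is what makes the approach
# lightweight relative to full fine-tuning.
prompt_proj = rng.normal(size=(embed_dim, prompt_len * embed_dim)) * 0.1

def contextualized_prompt(input_ids):
    """Derive continuous prompt vectors from the input text itself."""
    token_vecs = plm_embeddings[input_ids]      # (input_len, embed_dim)
    pooled = token_vecs.mean(axis=0)            # crude summary of the input
    return (pooled @ prompt_proj).reshape(prompt_len, embed_dim)

input_ids = rng.integers(0, vocab_size, size=input_len)
prompts = contextualized_prompt(input_ids)

# The prompt vectors are prepended to the input embeddings, so the PLM
# conditions on input-aware prompts rather than fixed, input-independent ones.
model_input = np.concatenate([prompts, plm_embeddings[input_ids]], axis=0)
print(model_input.shape)  # (prompt_len + input_len, embed_dim)
```

Unlike a fixed continuous prompt (a single learned matrix reused for every input), the prompt here changes whenever `input_ids` changes, which is the distinction the abstract draws.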