Paper Title

ProQA: Structural Prompt-based Pre-training for Unified Question Answering

Paper Authors

Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

Abstract

Question Answering (QA) is a longstanding challenge in natural language processing. Existing QA work mostly focuses on specific question types, knowledge domains, or reasoning skills. This specialization hinders systems from modeling commonalities across tasks and from generalizing to wider applications. To address this issue, we present ProQA, a unified QA paradigm that solves various tasks with a single model. ProQA uses a unified structural prompt as a bridge and improves QA-centric ability through structural prompt-based pre-training. Through a structurally designed prompt-based input schema, ProQA concurrently models knowledge generalization across all QA tasks while preserving knowledge customization for each specific QA task. Furthermore, ProQA is pre-trained on a large-scale synthesized corpus formatted with structural prompts, which equips the model with commonly required QA abilities. Experimental results on 11 QA benchmarks demonstrate that ProQA consistently boosts performance in full-data fine-tuning, few-shot learning, and zero-shot testing scenarios. ProQA also exhibits strong continual learning and transfer learning ability by taking advantage of the structural prompt.
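The structural prompt described in the abstract combines task-level information (answer format, task, domain) with instance-level inputs (question, passage) in one input sequence for a single text-to-text model. A minimal sketch of such an input schema follows; the bracketed key names and field choices are our own illustrative placeholders, not the exact tokens ProQA uses.

```python
def build_structural_prompt(fmt, task, domain, question, passage):
    """Assemble one flat input string from structured prompt components.

    Hypothetical schema for illustration: each component is introduced
    by a bracketed key so the model can distinguish task-level hints
    from instance-level content.
    """
    parts = [
        f"[Format] {fmt}",        # answer format, e.g. extractive or multiple-choice
        f"[Task] {task}",         # task or dataset identifier
        f"[Domain] {domain}",     # knowledge domain of the passage
        f"[Question] {question}", # instance-level question
        f"[Passage] {passage}",   # instance-level context
    ]
    return " ".join(parts)


prompt = build_structural_prompt(
    "extractive", "SQuAD", "Wikipedia",
    "Where is the Eiffel Tower?",
    "The Eiffel Tower is a landmark in Paris, France.",
)
print(prompt)
```

Because every QA task shares the same key slots while filling them with task-specific values, one model can read all tasks through a common interface yet still condition on each task's identity.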
