Paper Title
Challenge Closed-book Science Exam: A Meta-learning Based Question Answering System
Paper Authors
Paper Abstract
Prior work on standardized science exams requires support from large text corpora, such as a targeted science corpus from Wikipedia or SimpleWikipedia. However, retrieving knowledge from a large corpus is time-consuming, and questions embedded in complex semantic representations may interfere with retrieval. Inspired by the dual process theory in cognitive science, we propose a MetaQA framework, in which System 1 is an intuitive meta-classifier and System 2 is a reasoning module. Specifically, our method is built on a meta-learning approach and the large language model BERT, and can efficiently solve science problems by learning from related example questions without relying on external knowledge bases. We evaluate our method on the AI2 Reasoning Challenge (ARC), and the experimental results show that the meta-classifier yields considerable classification performance on emerging question types. The information provided by the meta-classifier significantly improves the accuracy of the reasoning module from 46.6% to 64.2%, which gives it a competitive advantage over retrieval-based QA methods.
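
The abstract does not detail how MetaQA is implemented, so the following is only a minimal sketch under stated assumptions: System 1 is approximated by a nearest-prototype (metric-based meta-learning) classifier over mean-pooled BERT embeddings, and System 2 by a BertForMultipleChoice reasoning module that receives related example questions of the predicted type as extra context. The question-type names, the support-set format, and the way System 1's signal is passed to System 2 are illustrative choices, not the authors' design; the multiple-choice head would also need fine-tuning on ARC before its scores are meaningful.

# Minimal sketch of a two-stage MetaQA-style pipeline (illustrative only; see the note above).
import torch
from transformers import BertTokenizer, BertModel, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")               # backbone for System 1
reasoner = BertForMultipleChoice.from_pretrained("bert-base-uncased")  # System 2 (head untrained here)

def embed(texts):
    # Mean-pooled BERT embeddings used by the meta-classifier.
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state            # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)                 # (B, H)

def build_prototypes(support_set):
    # support_set: {question_type: [example questions]} -> one prototype vector per type.
    return {qtype: embed(examples).mean(0) for qtype, examples in support_set.items()}

def classify(question, prototypes):
    # System 1: assign the question to the nearest question-type prototype.
    q = embed([question])[0]
    return min(prototypes, key=lambda t: torch.dist(q, prototypes[t]).item())

def answer(question, choices, related_examples):
    # System 2: score each answer choice, with related example questions of the
    # predicted type prepended as context (one possible way to inject System 1's output).
    context = " ".join(related_examples)
    enc = tokenizer([f"{context} {question}"] * len(choices), choices,
                    return_tensors="pt", padding=True, truncation=True)
    enc = {k: v.unsqueeze(0) for k, v in enc.items()}           # (1, num_choices, T)
    with torch.no_grad():
        logits = reasoner(**enc).logits                         # (1, num_choices)
    return int(logits.argmax(-1))

# Example usage with a toy support set (hypothetical question types and examples):
support = {
    "energy": ["Which form of energy does a stretched rubber band store?"],
    "life_cycle": ["Which stage comes after the larva stage in a frog's life cycle?"],
}
prototypes = build_prototypes(support)
question = "What energy transformation occurs in a battery-powered flashlight?"
qtype = classify(question, prototypes)
choice_idx = answer(question,
                    ["chemical to light", "light to sound", "sound to heat", "heat to chemical"],
                    support[qtype])

Swapping the nearest-prototype classifier for another few-shot method (for example, MAML-style adaptation) would not change the overall System 1 / System 2 split sketched here.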