Paper Title

A Property Induction Framework for Neural Language Models

Authors

Kanishka Misra, Julia Taylor Rayz, Allyson Ettinger

Abstract


To what extent can experience from language contribute to our conceptual knowledge? Computational explorations of this question have shed light on the ability of powerful neural language models (LMs) -- informed solely through text input -- to encode and elicit information about concepts and properties. To extend this line of research, we present a framework that uses neural-network language models (LMs) to perform property induction -- a task in which humans generalize novel property knowledge (has sesamoid bones) from one or more concepts (robins) to others (sparrows, canaries). Patterns of property induction observed in humans have shed considerable light on the nature and organization of human conceptual knowledge. Inspired by this insight, we use our framework to explore the property inductions of LMs, and find that they show an inductive preference to generalize novel properties on the basis of category membership, suggesting the presence of a taxonomic bias in their representations.
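The task format described above (generalizing a novel property such as *has sesamoid bones* from premise concepts like *robins* to conclusion concepts like *sparrows*) can be illustrated with a toy sketch. This is not the paper's method: the paper scores generalizations with neural language models, while this sketch uses hypothetical hand-coded concept vectors and cosine similarity purely to show the structure of a property-induction judgment.

```python
from math import sqrt

# Hypothetical concept vectors, for illustration only -- the paper's
# framework derives such judgments from neural language models instead.
CONCEPT_FEATURES = {
    "robin":   [1.0, 0.9, 0.1],
    "sparrow": [0.95, 0.85, 0.15],
    "canary":  [0.9, 0.8, 0.2],
    "whale":   [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def induction_strength(premises, conclusion):
    """Strength of generalizing a novel property from the premise
    concepts to the conclusion concept: here, the maximum similarity
    between the conclusion and any premise concept."""
    return max(
        cosine(CONCEPT_FEATURES[p], CONCEPT_FEATURES[conclusion])
        for p in premises
    )

# Premise: "robins have sesamoid bones". A taxonomically biased learner
# should extend the property more readily to sparrows than to whales.
print(induction_strength(["robin"], "sparrow"))  # high (similar bird)
print(induction_strength(["robin"], "whale"))    # low (distant mammal)
```

Under this toy setup, the property generalizes strongly from robins to sparrows and weakly to whales, mirroring the category-membership (taxonomic) preference the paper reports for language models.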
