Paper Title

Measuring Progress on Scalable Oversight for Large Language Models

Paper Authors

Bowman, Samuel R., Hyun, Jeeyoon, Perez, Ethan, Chen, Edwin, Pettit, Craig, Heiner, Scott, Lukošiūtė, Kamilė, Askell, Amanda, Jones, Andy, Chen, Anna, Goldie, Anna, Mirhoseini, Azalia, McKinnon, Cameron, Olah, Christopher, Amodei, Daniela, Amodei, Dario, Drain, Dawn, Li, Dustin, Tran-Johnson, Eli, Kernion, Jackson, Kerr, Jamie, Mueller, Jared, Ladish, Jeffrey, Landau, Joshua, Ndousse, Kamal, Lovitt, Liane, Elhage, Nelson, Schiefer, Nicholas, Joseph, Nicholas, Mercado, Noemí, DasSarma, Nova, Larson, Robin, McCandlish, Sam, Kundu, Sandipan, Johnston, Scott, Kravec, Shauna, Showk, Sheer El, Fort, Stanislav, Telleen-Lawton, Timothy, Brown, Tom, Henighan, Tom, Hume, Tristan, Bai, Yuntao, Hatfield-Dodds, Zac, Mann, Ben, Kaplan, Jared

Paper Abstract

Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on ways it can be studied empirically. We first present an experimental design centered on tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat -- a trivial baseline strategy for scalable oversight -- substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models and bolster recent findings that large language models can productively assist humans with difficult tasks.
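The experiment the abstract describes compares three conditions on the same question-answering items: the model alone, the unaided human, and the human assisted by an unreliable dialog model. The sketch below is a minimal illustration of that three-way comparison, not the paper's actual evaluation harness; the data class, function names, and the predictor callables (e.g. how a model or participant produces an answer) are assumptions for illustration.

```python
# Minimal sketch of the three-condition comparison described in the abstract:
# model alone vs. unaided human vs. human assisted by a dialog model,
# all scored on the same multiple-choice QA items (MMLU-style).
# All names here are illustrative assumptions, not the paper's code.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class QAItem:
    question: str
    choices: List[str]
    answer_index: int  # index of the correct choice


def accuracy(predict: Callable[[QAItem], int], items: List[QAItem]) -> float:
    """Fraction of items answered correctly under one condition."""
    correct = sum(1 for item in items if predict(item) == item.answer_index)
    return correct / len(items)


def compare_conditions(
    items: List[QAItem],
    model_only: Callable[[QAItem], int],
    unaided_human: Callable[[QAItem], int],
    human_with_model: Callable[[QAItem], int],
) -> Dict[str, float]:
    """Score all three oversight conditions on the same item set."""
    return {
        "model_alone": accuracy(model_only, items),
        "unaided_human": accuracy(unaided_human, items),
        "human_plus_model": accuracy(human_with_model, items),
    }
```

Under this framing, the abstract's headline result is that the "human_plus_model" condition substantially exceeds both of the other two on MMLU and time-limited QuALITY.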
