Paper Title
CodeT: Code Generation with Generated Tests
Paper Authors
Paper Abstract
The task of generating code solutions for a given programming problem can benefit from the use of pre-trained language models such as Codex, which can produce multiple diverse samples. However, a major challenge for this task is to select the most appropriate solution from the multiple samples generated by the pre-trained language models. A natural way to evaluate the quality and correctness of a code solution is to run it against a set of test cases, but the manual creation of such test cases is often costly and time-consuming. In this paper, we propose a novel method, CodeT, that leverages the same pre-trained language models to automatically generate test cases for the code samples, thus reducing the human effort and increasing the coverage of the test scenarios. CodeT then executes the code samples using the generated test cases, and performs a dual execution agreement, which considers both the consistency of the outputs against the generated test cases and the agreement of the outputs with other code samples. We conduct comprehensive experiments on four benchmarks, HumanEval, MBPP, APPS and CodeContests, using five different pre-trained language models with varying sizes and capabilities. Our results show that CodeT can significantly improve the performance of code solution selection over previous methods, achieving remarkable and consistent gains across different models and benchmarks. For instance, CodeT improves the pass@1 metric on HumanEval to 65.8%, which represents an absolute improvement of 18.8% over the code-davinci-002 model, and an absolute improvement of more than 20% over the previous state-of-the-art results.
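Below is a minimal sketch of how the dual execution agreement described in the abstract might be scored, assuming that both the candidate solutions and the generated test cases are plain Python strings produced by the same model. The function names (e.g. `rank_by_dual_agreement`) and the exact scoring rule shown here are illustrative assumptions, not the paper's reference implementation.

```python
# Illustrative sketch only: groups code samples by the set of generated tests
# they pass, then scores each group by (#agreeing samples) * (#tests passed),
# reflecting both test consistency and cross-sample agreement.
from collections import defaultdict


def passes(code: str, test: str) -> bool:
    """Return True if executing `code` followed by the assertion `test` succeeds."""
    env: dict = {}
    try:
        exec(code, env)   # define the candidate solution
        exec(test, env)   # run one generated assertion against it
        return True
    except Exception:
        return False


def rank_by_dual_agreement(samples: list[str], tests: list[str]) -> str:
    """Pick a sample from the group with the highest dual-agreement score."""
    groups: dict[frozenset, list[str]] = defaultdict(list)
    for code in samples:
        passed = frozenset(t for t in tests if passes(code, t))
        groups[passed].append(code)

    # Score each group by (number of samples) * (number of tests they pass).
    best_tests, best_samples = max(
        groups.items(), key=lambda kv: len(kv[1]) * len(kv[0])
    )
    return best_samples[0]  # any representative of the highest-scoring group


if __name__ == "__main__":
    samples = [
        "def add(a, b):\n    return a + b",
        "def add(a, b):\n    return a - b",  # an incorrect candidate
        "def add(a, b):\n    return b + a",
    ]
    tests = ["assert add(1, 2) == 3", "assert add(0, 0) == 0"]
    print(rank_by_dual_agreement(samples, tests))
```

In this toy run, the two correct candidates pass both generated tests and form the largest consensus group, so one of them is selected over the incorrect candidate, which passes only the weaker test.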