Paper Title
The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence
Paper Authors
Paper Abstract
In 1950, Alan Turing proposed an imitation game as the ultimate test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions were indistinguishable from those of a human? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds. But not all types of AI are human-like. In fact, many of the most powerful systems are very different from humans. So an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, humans retain the power to insist on a share of the value created. Furthermore, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers.