Paper Title
Ansor: Generating High-Performance Tensor Programs for Deep Learning
Paper Authors
Paper Abstract
High-performance tensor programs are crucial to guarantee efficient execution of deep neural networks. However, obtaining performant tensor programs for different operators on various hardware platforms is notoriously challenging. Currently, deep learning systems rely on vendor-provided kernel libraries or various search strategies to get performant tensor programs. These approaches either require significant engineering effort to develop platform-specific optimization code or fall short of finding high-performance programs due to restricted search space and ineffective exploration strategy. We present Ansor, a tensor program generation framework for deep learning applications. Compared with existing search strategies, Ansor explores many more optimization combinations by sampling programs from a hierarchical representation of the search space. Ansor then fine-tunes the sampled programs with evolutionary search and a learned cost model to identify the best programs. Ansor can find high-performance programs that are outside the search space of existing state-of-the-art approaches. In addition, Ansor utilizes a task scheduler to simultaneously optimize multiple subgraphs in deep neural networks. We show that Ansor improves the execution performance of deep neural networks relative to the state-of-the-art on the Intel CPU, ARM CPU, and NVIDIA GPU by up to $3.8\times$, $2.6\times$, and $1.7\times$, respectively.
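The fine-tuning step the abstract describes combines evolutionary search with a learned cost model. The following is a minimal illustrative sketch of that loop, not Ansor's actual implementation; `sample_programs`, `mutate`, and `cost_model` are hypothetical stand-ins for Ansor's hierarchical sampler, program mutation rules, and learned performance predictor.

```python
import random

# Hypothetical stand-ins: `sample_programs(n)` draws n candidates from the
# hierarchical search space, `cost_model.predict(p)` scores a program, and
# `mutate(p)` perturbs a program (e.g., tile sizes or annotations).

def evolutionary_search(sample_programs, mutate, cost_model,
                        population=128, generations=10, keep_top=32):
    """Fine-tune sampled programs: keep the candidates the learned cost
    model ranks highest, then mutate them to form the next generation."""
    pop = sample_programs(population)
    for _ in range(generations):
        # Rank the population by predicted performance (higher is better).
        scored = sorted(pop, key=cost_model.predict, reverse=True)
        parents = scored[:keep_top]
        # Next generation: survivors plus mutated offspring.
        children = [mutate(random.choice(parents))
                    for _ in range(population - keep_top)]
        pop = parents + children
    # Only the most promising programs would then be measured on hardware.
    return max(pop, key=cost_model.predict)
```

In the paper's design, the cost model ranks candidates cheaply so that expensive on-device measurement is reserved for the most promising programs.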
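The abstract also mentions a task scheduler that optimizes multiple subgraphs of a network at once. Below is a simplified sketch of the idea under stated assumptions: `tune_one_round` and `weight` are hypothetical helpers, and the greedy rule shown is a proxy for Ansor's actual gradient-based prioritization.

```python
# Hypothetical sketch of a task scheduler that allocates tuning rounds
# across subgraphs to reduce end-to-end latency. `tune_one_round(task)`
# runs a batch of trials and returns that task's best latency so far;
# `weight[task]` is how many times the subgraph appears in the network.

def schedule_tasks(tasks, tune_one_round, weight, total_rounds):
    # Warm-up: give every subgraph one tuning round.
    best = {t: tune_one_round(t) for t in tasks}
    for _ in range(total_rounds - len(tasks)):
        # Greedy proxy for Ansor's gradient-based rule: spend the next
        # round on the task contributing most to total network latency.
        t = max(tasks, key=lambda task: weight[task] * best[task])
        best[t] = min(best[t], tune_one_round(t))
    return best
```

The point of the scheduler is that a fixed tuning budget is spent where it matters: subgraphs that dominate end-to-end latency receive more trials than ones that are already fast or rarely executed.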