Paper Title

AutoPhase: Juggling HLS Phase Orderings in Random Forests with Deep Reinforcement Learning

Paper Authors

Qijing Huang, Ameer Haj-Ali, William Moses, John Xiang, Ion Stoica, Krste Asanovic, John Wawrzynek

Paper Abstract

The performance of the code a compiler generates depends on the order in which it applies the optimization passes. Choosing a good order, often referred to as the phase-ordering problem, is NP-hard. As a result, existing solutions rely on a variety of heuristics. In this paper, we evaluate a new technique to address the phase-ordering problem: deep reinforcement learning. To this end, we implement AutoPhase: a framework that takes a program and uses deep reinforcement learning to find a sequence of compilation passes that minimizes its execution time. Without loss of generality, we construct this framework in the context of the LLVM compiler toolchain and target high-level synthesis programs. We use random forests to quantify the correlation between the effectiveness of a given pass and the program's features. This helps us reduce the search space by avoiding phase orderings that are unlikely to improve the performance of a given program. We compare the performance of AutoPhase to state-of-the-art algorithms that address the phase-ordering problem. In our evaluation, we show that AutoPhase improves circuit performance by 28% when compared to using the -O3 compiler flag, and achieves competitive results compared to the state-of-the-art solutions while requiring fewer samples. Furthermore, unlike existing state-of-the-art solutions, our deep reinforcement learning solution shows promising results in generalizing to real benchmarks and 12,874 different randomly generated programs after training on a hundred randomly generated programs.
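
To make the two ideas in the abstract concrete, below is a minimal, self-contained Python sketch of (1) random-forest pruning of the pass set based on program features and (2) a search over pass sequences driven by a cycle-count signal. Everything here is an assumption of mine for illustration: the pass count, the stand-in functions program_features and estimate_cycles, and the synthetic training data are hypothetical, not AutoPhase's actual implementation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
N_PASSES, N_FEATURES = 16, 8   # toy sizes; the paper works with a much larger pass set

def program_features(program):
    # Hypothetical stand-in for static program features (e.g. instruction mix counts).
    return rng.random(N_FEATURES)

def estimate_cycles(program, passes):
    # Hypothetical stand-in for the HLS cycle-count estimate after applying `passes`.
    return 1000.0 - 10.0 * len(set(passes)) + rng.normal(0.0, 5.0)

# Idea 1: random-forest pruning of the action space. Fit a forest that predicts
# the speedup of (program features, pass) pairs, then drop passes whose
# predicted gain is not positive. The training data here is synthetic.
X = rng.random((500, N_FEATURES + 1))          # columns: features + pass index
y = rng.normal(0.0, 1.0, 500)                  # observed speedups (synthetic)
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

feats = program_features("toy_program")
useful = [p for p in range(N_PASSES)
          if forest.predict(np.append(feats, p).reshape(1, -1))[0] > 0.0]

# Idea 2: search over the pruned pass set. A real agent learns a policy network;
# this greedy rollout only illustrates the environment interface:
# state = program features, action = next pass, reward = cycles saved.
sequence, cycles = [], estimate_cycles("toy_program", [])
for _ in range(8):                             # bounded-length pass sequence
    best_pass, best_cycles = None, cycles
    for p in useful:
        c = estimate_cycles("toy_program", sequence + [p])
        if c < best_cycles:
            best_pass, best_cycles = p, c
    if best_pass is None:
        break
    sequence.append(best_pass)
    cycles = best_cycles

print("pass sequence:", sequence, "estimated cycles:", round(cycles))

In the paper the policy is trained with deep reinforcement learning against the HLS cycle-count estimate as the reward; the greedy rollout above replaces that learned policy only so the sketch stays runnable without an RL framework or an HLS toolchain.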
