Paper Title

Rubik: A Hierarchical Architecture for Efficient Graph Learning

Authors

Xiaobing Chen, Yuke Wang, Xinfeng Xie, Xing Hu, Abanti Basak, Ling Liang, Mingyu Yan, Lei Deng, Yufei Ding, Zidong Du, Yunji Chen, Yuan Xie

Abstract

Graph convolutional networks (GCNs) emerge as a promising direction for learning inductive representations of graph data, which is common in widespread applications such as E-commerce, social networks, and knowledge graphs. However, learning from graphs is non-trivial because of the mixed computation model, which involves both graph analytics and neural-network computing. To this end, we decompose GCN learning into two hierarchical paradigms: graph-level and node-level computing. Such a hierarchical paradigm facilitates software and hardware acceleration for GCN learning. We propose a lightweight graph reordering methodology, incorporated with a GCN accelerator architecture equipped with a customized cache design, to fully utilize graph-level data reuse. We also propose a mapping methodology, aware of data reuse and task-level parallelism, to handle various graph inputs effectively. Results show that the Rubik accelerator design improves energy efficiency by 26.3x to 1375.2x over GPU platforms across different datasets and GCN models.
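
To make the graph-level vs. node-level decomposition concrete, here is a minimal NumPy sketch (not taken from the paper; the mean aggregator and all names are illustrative assumptions) that splits one GCN layer into a sparse, irregular neighbor aggregation and a dense, regular weight transform:

```python
import numpy as np

def gcn_layer(features, neighbors, weight):
    """One GCN layer split into the two hierarchical paradigms.

    features : (num_nodes, in_dim)  node feature matrix H
    neighbors: list of neighbor-id lists (adjacency, incl. self-loops)
    weight   : (in_dim, out_dim)    layer weight matrix W
    """
    num_nodes = features.shape[0]

    # Graph-level computing: irregular, sparse aggregation.
    # Each node averages the features of its neighbors (mean aggregator
    # assumed for illustration).
    aggregated = np.zeros_like(features)
    for v in range(num_nodes):
        aggregated[v] = features[neighbors[v]].mean(axis=0)

    # Node-level computing: regular, dense neural-network transform.
    return np.maximum(aggregated @ weight, 0.0)  # ReLU activation

# Tiny usage example: a 4-node path graph with self-loops.
neighbors = [[0, 1], [0, 1, 2], [1, 2, 3], [2, 3]]
H = np.random.rand(4, 8).astype(np.float32)
W = np.random.rand(8, 4).astype(np.float32)
print(gcn_layer(H, neighbors, W).shape)  # (4, 4)
```

The aggregation loop has data-dependent, cache-unfriendly access patterns, while the matrix multiply is dense and regular, which is why the two levels benefit from different hardware treatment.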
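The lightweight graph reordering mentioned in the abstract relabels nodes so that vertices whose features are fetched together sit close in memory, raising hit rates in the customized cache during graph-level aggregation. The paper's exact algorithm is not reproduced here; the following is a hedged sketch of one common lightweight stand-in, descending-degree relabeling:

```python
def degree_reorder(neighbors):
    """Relabel nodes so high-degree (frequently reused) nodes get small IDs.

    A minimal stand-in for a lightweight reordering pass: features of hub
    nodes then cluster at the front of memory, improving cache locality
    during aggregation. Rubik's actual reordering method may differ.
    """
    # New visiting order: node ids sorted by descending degree.
    order = sorted(range(len(neighbors)), key=lambda v: -len(neighbors[v]))
    relabel = {old: new for new, old in enumerate(order)}  # old id -> new id
    # Rebuild the adjacency under the new labeling.
    reordered = [None] * len(neighbors)
    for old, nbrs in enumerate(neighbors):
        reordered[relabel[old]] = sorted(relabel[u] for u in nbrs)
    return reordered, relabel

# Usage example: the hub node (id 1) moves to id 0 after reordering.
nbrs = [[0, 1], [0, 1, 2, 3], [1, 2], [1, 3]]
reordered, relabel = degree_reorder(nbrs)
```

The feature matrix would be permuted with the same mapping (e.g., `features[order]`) so memory layout matches the new node IDs.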
