Paper Title

Faster Graph Embeddings via Coarsening

Paper Authors

Matthew Fahrbach, Gramoz Goranci, Richard Peng, Sushant Sachdeva, Chi Wang

Abstract

Graph embeddings are a ubiquitous tool for machine learning tasks, such as node classification and link prediction, on graph-structured data. However, computing the embeddings for large-scale graphs is prohibitively inefficient even if we are interested only in a small subset of relevant vertices. To address this, we present an efficient graph coarsening approach, based on Schur complements, for computing the embedding of the relevant vertices. We prove that these embeddings are preserved exactly by the Schur complement graph that is obtained via Gaussian elimination on the non-relevant vertices. As computing Schur complements is expensive, we give a nearly-linear time algorithm that generates a coarsened graph on the relevant vertices that provably matches the Schur complement in expectation in each iteration. Our experiments involving prediction tasks on graphs demonstrate that computing embeddings on the coarsened graph, rather than the entire graph, leads to significant time savings without sacrificing accuracy.
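To illustrate the elimination step the abstract describes, the following is a minimal sketch (not the paper's implementation) of taking the Schur complement of a graph Laplacian onto a set of relevant vertices. The example graph, the vertex partition, and all variable names are illustrative assumptions; the key property it checks is that the Schur complement of a Laplacian is again a graph Laplacian on the retained vertices.

```python
import numpy as np

# Hypothetical 5-vertex path graph 0-3-1-4-2; vertices 0, 1, 2 are
# "relevant" (kept), vertices 3 and 4 are eliminated.
edges = [(0, 3), (3, 1), (1, 4), (4, 2)]
n = 5
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1.0
    L[v, v] += 1.0
    L[u, v] -= 1.0
    L[v, u] -= 1.0

T = [0, 1, 2]  # relevant vertices
S = [3, 4]     # vertices removed by Gaussian elimination

# Schur complement onto T: L_TT - L_TS @ inv(L_SS) @ L_ST
L_TT = L[np.ix_(T, T)]
L_TS = L[np.ix_(T, S)]
L_SS = L[np.ix_(S, S)]
schur = L_TT - L_TS @ np.linalg.inv(L_SS) @ L_TS.T

# The result is itself a graph Laplacian on T:
# rows sum to zero and off-diagonal entries are non-positive.
print(np.allclose(schur.sum(axis=1), 0))  # True
print(schur)  # Laplacian of the weighted path 0-1-2
```

Here eliminating the interior vertices of the path collapses each two-hop connection into a single weighted edge, which is why the coarsened graph can stand in for the original when embedding only the relevant vertices.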
