Title


A Brief Survey on Representation Learning based Graph Dimensionality Reduction Techniques

Authors

Akella, Akhil Pandey

Abstract


Dimensionality reduction techniques map data represented in higher dimensions onto lower dimensions with varying degrees of information loss. Graph dimensionality reduction techniques adopt the same principle, providing latent representations of the graph structure with minor adaptations to the output representations along with the input data. There exist several cutting-edge techniques that are efficient at generating embeddings from graph data and projecting them onto low-dimensional latent spaces. Due to variations in operational philosophy, the benefits of a particular graph dimensionality reduction technique might not prove advantageous for every scenario, or rather every dataset. As a result, some techniques are efficient at representing the relationships between nodes at lower dimensions, while others are good at encapsulating the entire graph structure in a low-dimensional space. We present this survey to outline the benefits as well as the problems associated with existing graph dimensionality reduction techniques. We also attempt to connect the dots regarding potential improvements to some of the techniques. This survey could be helpful for upcoming researchers interested in exploring the usage of graph representation learning to effectively produce low-dimensional graph embeddings with varying degrees of granularity.
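To make the abstract's central idea concrete, here is a minimal sketch (not taken from the paper) of one classical graph dimensionality reduction approach: a spectral embedding that projects each node of a toy graph onto a 2-dimensional latent space via eigenvectors of the graph Laplacian. The graph, the choice of `k = 2`, and all variable names are illustrative assumptions; only `numpy` is used.

```python
import numpy as np

# Toy graph: two triangles joined by a single bridge edge (adjacency matrix).
# This structure is illustrative only.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Unnormalized graph Laplacian: L = D - A, where D is the degree matrix.
D = np.diag(A.sum(axis=1))
L = D - A

# eigh returns eigenvalues in ascending order for a symmetric matrix.
eigvals, eigvecs = np.linalg.eigh(L)

# Skip the trivial constant eigenvector (eigenvalue ~ 0) and keep the next
# k eigenvectors as a k-dimensional embedding of the nodes.
k = 2
embedding = eigvecs[:, 1:1 + k]   # shape: (num_nodes, k)

print(embedding.shape)
```

Nodes in the same triangle end up closer together in the embedding than nodes in different triangles, which is the sense in which such a projection "represents the relationship between nodes at lower dimensions" while losing some information about the original structure.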
