Paper Title

Deep Representation Learning For Multimodal Brain Networks

Authors

Wen Zhang, Liang Zhan, Paul Thompson, Yalin Wang

Abstract

Applying network science approaches to investigate the functions and anatomy of the human brain is prevalent in modern medical imaging analysis. Due to the complex network topology, for an individual brain, mining a discriminative network representation from the multimodal brain networks is non-trivial. The recent success of deep learning techniques on graph-structured data suggests a new way to model the non-linear cross-modality relationship. However, current deep brain network methods either ignore the intrinsic graph topology or require a network basis shared within a group. To address these challenges, we propose a novel end-to-end deep graph representation learning (Deep Multimodal Brain Networks - DMBN) to fuse multimodal brain networks. Specifically, we decipher the cross-modality relationship through a graph encoding and decoding process. The higher-order network mappings from brain structural networks to functional networks are learned in the node domain. The learned network representation is a set of node features that are informative to induce brain saliency maps in a supervised manner. We test our framework in both synthetic and real image data. The experimental results show the superiority of the proposed method over some other state-of-the-art deep brain network models.
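The graph encoding and decoding idea described above can be illustrated with a minimal sketch: a single graph-convolution layer encodes node features over the structural network, and an inner-product decoder predicts functional connectivity from the resulting node embeddings. This is a generic encoder-decoder illustration under assumed shapes and names (`A_struct`, `gcn_encode`, `decode_functional` are all hypothetical), not the authors' DMBN implementation, which additionally learns higher-order mappings and saliency maps in a supervised manner.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_encode(A_struct, X, W):
    # One graph-convolution layer over the structural network:
    # node embeddings H = ReLU(A_norm @ X @ W)
    return np.maximum(0.0, normalize_adj(A_struct) @ X @ W)

def decode_functional(H):
    # Inner-product decoder: predicted functional connectivity,
    # squashed to (0, 1) with a sigmoid
    return 1.0 / (1.0 + np.exp(-(H @ H.T)))

# Toy example: 4 brain regions with illustrative features and weights
rng = np.random.default_rng(0)
A_struct = np.array([[0, 1, 1, 0],
                     [1, 0, 0, 1],
                     [1, 0, 0, 1],
                     [0, 1, 1, 0]], dtype=float)  # structural network
X = rng.standard_normal((4, 3))   # node features (hypothetical)
W = rng.standard_normal((3, 2))   # layer weights (hypothetical)

H = gcn_encode(A_struct, X, W)    # learned node representation
F_pred = decode_functional(H)     # symmetric matrix, entries in (0, 1)
```

In a trained model, `W` would be optimized so that `F_pred` matches the observed functional network; the inner-product decoder guarantees a symmetric prediction by construction.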
