Paper Title

Modeling Shared Responses in Neuroimaging Studies through MultiView ICA

Paper Authors

Hugo Richard, Luigi Gresele, Aapo Hyvärinen, Bertrand Thirion, Alexandre Gramfort, Pierre Ablin

Paper Abstract

Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization. However, the aggregation of data coming from multiple subjects is challenging, since it requires accounting for large variability in anatomy, functional topography and stimulus response across individuals. Data modeling is especially hard for ecologically relevant conditions such as movie watching, where the experimental setup does not imply well-defined cognitive operations. We propose a novel MultiView Independent Component Analysis (ICA) model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise. Contrary to most group-ICA procedures, the likelihood of the model is available in closed form. We develop an alternate quasi-Newton method for maximizing the likelihood, which is robust and converges quickly. We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects. Moreover, the sources recovered by our model exhibit lower between-session variability than other methods. On magnetoencephalography (MEG) data, our method yields more accurate source localization on phantom data. Applied to 200 subjects from the Cam-CAN dataset, it reveals a clear sequence of evoked activity in sensor and source space. The code is freely available at https://github.com/hugorichard/multiviewica.
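To make the generative model in the abstract concrete, here is a minimal NumPy sketch of "data from each subject are modeled as a linear combination of shared independent sources plus noise", i.e. x_i = A_i (s + n_i) with a subject-specific mixing matrix A_i. The dimensions, the Laplace source distribution, and the noise level are illustrative assumptions, not the paper's exact experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_sources, n_samples = 4, 3, 1000

# Shared independent sources, common to all subjects.
# A heavy-tailed (Laplace) distribution is a common ICA assumption.
S = rng.laplace(size=(n_sources, n_samples))

sigma = 0.1  # noise standard deviation (illustrative)
X = []
for _ in range(n_subjects):
    # Subject-specific mixing matrix A_i
    A = rng.normal(size=(n_sources, n_sources))
    # Subject-specific noise n_i added to the shared sources
    N = sigma * rng.normal(size=(n_sources, n_samples))
    # Observed data for subject i: x_i = A_i (s + n_i)
    X.append(A @ (S + N))

# X is a list of n_subjects arrays, each (n_sources, n_samples)
```

Simulated data of this form is what the MultiView ICA likelihood is written for; see the linked repository for the authors' actual estimation code.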
