Paper Title

Adversarial Robustness of MR Image Reconstruction under Realistic Perturbations

Authors

Morshuis, Jan Nikolas, Gatidis, Sergios, Hein, Matthias, Baumgartner, Christian F.

Abstract

Deep Learning (DL) methods have shown promising results for solving ill-posed inverse problems such as MR image reconstruction from undersampled $k$-space data. However, these approaches currently have no guarantees for reconstruction quality and the reliability of such algorithms is only poorly understood. Adversarial attacks offer a valuable tool to understand possible failure modes and worst case performance of DL-based reconstruction algorithms. In this paper we describe adversarial attacks on multi-coil $k$-space measurements and evaluate them on the recently proposed E2E-VarNet and a simpler UNet-based model. In contrast to prior work, the attacks are targeted to specifically alter diagnostically relevant regions. Using two realistic attack models (adversarial $k$-space noise and adversarial rotations) we are able to show that current state-of-the-art DL-based reconstruction algorithms are indeed sensitive to such perturbations to a degree where relevant diagnostic information may be lost. Surprisingly, in our experiments the UNet and the more sophisticated E2E-VarNet were similarly sensitive to such attacks. Our findings add further to the evidence that caution must be exercised as DL-based methods move closer to clinical practice.
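The adversarial $k$-space noise attack described above can be sketched as a projected-gradient attack that searches for a small, norm-bounded perturbation of the measured $k$-space which maximizes the reconstruction error inside a diagnostically relevant region. The sketch below is a minimal illustration, not the paper's implementation: the tiny convolutional "reconstructor" stands in for the UNet / E2E-VarNet, the region-of-interest mask, step sizes, and the `kspace_pgd` helper are all hypothetical, and single-coil real/imaginary channels replace the multi-coil data used in the paper.

```python
import torch

class ToyRecon(torch.nn.Module):
    """Stand-in for a DL reconstructor (assumption: any differentiable
    model mapping k-space to an image works for this sketch)."""
    def __init__(self):
        super().__init__()
        self.refine = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1)

    def forward(self, kspace):
        # kspace: (B, 2, H, W) with real/imag channels.
        # Naive inverse FFT "zero-filled" image plus a learned refinement.
        img = torch.fft.ifft2(torch.complex(kspace[:, 0], kspace[:, 1])).abs()
        return img.unsqueeze(1) + self.refine(kspace)

def kspace_pgd(model, kspace, roi_mask, eps=0.05, step=0.01, iters=10):
    """Find an l_inf-bounded k-space perturbation that maximizes the
    reconstruction change inside a region-of-interest mask (targeted
    in the sense of the paper: only the ROI error is maximized)."""
    clean = model(kspace).detach()
    delta = torch.zeros_like(kspace, requires_grad=True)
    for _ in range(iters):
        recon = model(kspace + delta)
        loss = ((recon - clean) ** 2 * roi_mask).mean()  # ROI error only
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # gradient-ascent step
            delta.clamp_(-eps, eps)             # project onto the l_inf ball
            delta.grad.zero_()
    return delta.detach()

torch.manual_seed(0)
model = ToyRecon()
kspace = torch.randn(1, 2, 16, 16)              # toy k-space measurement
roi = torch.zeros(1, 1, 16, 16)
roi[..., 4:12, 4:12] = 1.0                      # hypothetical diagnostic region
delta = kspace_pgd(model, kspace, roi)
print(delta.abs().max().item() <= 0.05)         # perturbation respects budget
```

The adversarial-rotation attack from the paper follows the same optimization pattern, but the search variable is a rotation angle applied to the object rather than an additive perturbation of the measurements.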
