Paper Title

Segmentation and Generation of Magnetic Resonance Images by Deep Neural Networks

Author

Delplace, Antoine

Abstract

Magnetic Resonance Images (MRIs) are widely used in the medical field to detect and better understand diseases. To speed up the automatic processing of scans and enhance medical research, this project focuses on automatically segmenting targeted parts of MRIs and on generating new MRI datasets from random noise. More specifically, a Deep Neural Network architecture called U-net is used to segment bones and cartilages in knee MRIs, and several Generative Adversarial Networks (GANs) are compared and tuned to create new, realistic, high-quality brain MRIs that can serve as training sets for more advanced models. Three main architectures are described: Deep Convolutional GAN (DCGAN), Super Resolution Residual GAN (SRResGAN), and Progressive GAN (ProGAN); five loss functions are tested: the original GAN loss, LSGAN, WGAN, WGAN_GP, and DRAGAN. Moreover, a quantitative benchmark is carried out with evaluation measures based on Principal Component Analysis. The results show that U-net achieves state-of-the-art performance in segmenting bones and cartilages in knee MRIs (accuracy above 99.5%). The three GAN architectures can also generate realistic brain MRIs, even though some models have difficulty converging. The main insights for stabilizing the networks are one-sided label smoothing, regularization with a gradient penalty in the loss function (as in WGAN_GP or DRAGAN), adding a minibatch similarity layer to the Discriminator, and long training times.
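One of the stabilization tricks the abstract names, one-sided label smoothing, can be sketched in a few lines. The snippet below is an illustrative NumPy sketch (not the paper's actual implementation, which trains full GAN architectures): the Discriminator's real targets are softened from 1.0 to 0.9 while fake targets stay at 0.0, which penalizes an overconfident Discriminator and keeps it from overpowering the Generator. The function names and the 0.9 value are conventional choices, not taken from the paper.

```python
import numpy as np

def bce(preds, labels, eps=1e-7):
    # Binary cross-entropy averaged over the batch; eps avoids log(0)
    preds = np.clip(preds, eps, 1 - eps)
    return -np.mean(labels * np.log(preds) + (1 - labels) * np.log(1 - preds))

def discriminator_loss(real_preds, fake_preds, smooth=0.9):
    # One-sided label smoothing: real targets are `smooth` (e.g. 0.9)
    # instead of 1.0, while fake targets stay at 0.0 -- hence "one-sided"
    real_labels = np.full_like(real_preds, smooth)
    fake_labels = np.zeros_like(fake_preds)
    return bce(real_preds, real_labels) + bce(fake_preds, fake_labels)

# For overconfident real predictions, the smoothed loss is higher than the
# unsmoothed one, discouraging the Discriminator from saturating
real_preds = np.array([0.95, 0.99, 0.90])
fake_preds = np.array([0.10, 0.05, 0.20])
loss = discriminator_loss(real_preds, fake_preds)
```

In a real training loop the Generator's loss is left unchanged; only the Discriminator's real-sample targets are smoothed.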
