Paper Title
Models Genesis
Paper Authors
Paper Abstract
Transfer learning from natural images to medical images has been established as one of the most practical paradigms in deep learning for medical image analysis. To fit this paradigm, however, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information and thereby inevitably compromising performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learned by self-supervision), and generic (serving as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications, covering both segmentation and classification. More importantly, learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, but our Models Genesis consistently top any 2D/2.5D approach, including fine-tuning models pre-trained on ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and the significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated and recurrent anatomy in medical images can serve as strong yet free supervision signals for deep models to learn common anatomical representations automatically via self-supervision. As open science, all code and pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
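The self-supervision idea the abstract describes (anatomy as a free supervision signal) can be sketched as a "distort-and-restore" pretext task: distort a 3D sub-volume and use the original as the regression target. The sketch below uses one illustrative distortion, local voxel shuffling, with NumPy; the function names and parameters are assumptions for illustration, not the authors' exact pipeline (their repository includes several such transformations):

```python
import numpy as np

def local_voxel_shuffle(volume, num_blocks=20, block=8, seed=None):
    """Distort a 3D volume by shuffling voxels inside small random blocks.

    The anatomy stays globally intact, so restoring the original volume
    forces a model to learn local anatomical structure. This is one
    plausible pretext transformation (illustrative, not the exact code).
    """
    rng = np.random.default_rng(seed)
    out = volume.copy()
    d, h, w = volume.shape
    for _ in range(num_blocks):
        z = rng.integers(0, d - block)
        y = rng.integers(0, h - block)
        x = rng.integers(0, w - block)
        patch = out[z:z + block, y:y + block, x:x + block].ravel()
        rng.shuffle(patch)
        out[z:z + block, y:y + block, x:x + block] = patch.reshape(
            block, block, block)
    return out

def make_restoration_pair(volume, seed=None):
    """Self-supervised training sample: (distorted input, original target).

    A 3D encoder-decoder trained to map the first element back to the
    second needs no manual labels -- the image itself is the supervision.
    """
    return local_voxel_shuffle(volume, seed=seed), volume

# Example: build one training pair from a random 32^3 sub-volume.
vol = np.random.default_rng(0).random((32, 32, 32)).astype(np.float32)
x, y = make_restoration_pair(vol, seed=1)
```

After pretraining on such pairs with a voxel-wise reconstruction loss, the encoder (or the full encoder-decoder) would be fine-tuned on the target segmentation or classification task.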