Paper Title
Disentangled Uncertainty and Out of Distribution Detection in Medical Generative Models
Paper Authors
Paper Abstract
Trusting the predictions of deep learning models in safety-critical settings such as the medical domain is still not a viable option. Disentangled uncertainty quantification in the field of medical imaging has received little attention. In this paper, we study disentangled uncertainties in image-to-image translation tasks in the medical domain. We compare multiple uncertainty quantification methods, namely Ensembles, Flipout, Dropout, and DropConnect, while using CycleGAN to convert T1-weighted brain MRI scans into T2-weighted brain MRI scans. We further evaluate uncertainty behavior in the presence of out-of-distribution data (brain CT and RGB face images), showing that epistemic uncertainty can be used to detect out-of-distribution inputs, which should increase the reliability of model outputs.
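To make the abstract's out-of-distribution claim concrete, the sketch below shows one standard way to obtain epistemic uncertainty from a dropout-equipped image-to-image generator (MC Dropout) and to threshold its mean as an OOD score. This is an illustrative sketch, not the authors' implementation: the names `generator`, `enable_mc_dropout`, `epistemic_uncertainty`, `is_out_of_distribution`, and the calibration of `threshold` are all assumptions, and the paper additionally evaluates Ensembles, Flipout, and DropConnect, which are not shown here.

```python
# Minimal sketch (not the paper's code): MC Dropout epistemic uncertainty for a
# CycleGAN-style generator, with the mean uncertainty used as an OOD score.
# `generator` is assumed to be an nn.Module that contains Dropout layers.
import torch
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> None:
    """Keep Dropout layers stochastic at inference time (MC Dropout)."""
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()


@torch.no_grad()
def epistemic_uncertainty(generator: nn.Module,
                          x: torch.Tensor,
                          n_samples: int = 20) -> torch.Tensor:
    """Per-pixel variance across n_samples stochastic forward passes."""
    generator.eval()            # freeze BatchNorm statistics, etc.
    enable_mc_dropout(generator)
    samples = torch.stack([generator(x) for _ in range(n_samples)])  # (T, B, C, H, W)
    return samples.var(dim=0)   # epistemic uncertainty map, (B, C, H, W)


def is_out_of_distribution(uncertainty_map: torch.Tensor,
                           threshold: float) -> torch.Tensor:
    """Flag images whose mean epistemic uncertainty exceeds a threshold."""
    score = uncertainty_map.mean(dim=(1, 2, 3))  # one scalar per image
    return score > threshold
```

In practice, the threshold would be calibrated on held-in-distribution validation data (e.g., a high percentile of in-distribution scores), so that inputs such as brain CT or RGB face images, which the generator was never trained on, yield scores above it.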