Paper Title
FlowNet-PET: Unsupervised Learning to Perform Respiratory Motion Correction in PET Imaging
Paper Authors
Paper Abstract
To correct for respiratory motion in PET imaging, an interpretable and unsupervised deep learning technique, FlowNet-PET, was constructed. The network was trained to predict the optical flow between two PET frames from different breathing amplitude ranges. The trained model aligns different retrospectively-gated PET images, providing a final image with similar counting statistics as a non-gated image, but without the blurring effects. FlowNet-PET was applied to anthropomorphic digital phantom data, which provided the possibility to design robust metrics to quantify the corrections. When comparing the predicted optical flows to the ground truths, the median absolute error was found to be smaller than the pixel and slice widths. The improvements were illustrated by comparing against images without motion and computing the intersection over union (IoU) of the tumors as well as the enclosed activity and coefficient of variation (CoV) within the no-motion tumor volume before and after the corrections were applied. The average relative improvements provided by the network were 64%, 89%, and 75% for the IoU, total activity, and CoV, respectively. FlowNet-PET achieved similar results as the conventional retrospective phase binning approach, but only required one sixth of the scan duration. The code and data have been made publicly available (https://github.com/teaghan/FlowNet_PET).
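As the abstract describes, FlowNet-PET predicts the optical flow between two PET frames from different breathing amplitude ranges and uses it to align the gated images. The sketch below illustrates how a gated frame could be resampled once such a flow field is available; the function name, the array shapes, and the choice of backward warping with trilinear interpolation via SciPy are illustrative assumptions, not the paper's implementation (which is available in the linked repository).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_frame(frame, flow):
    """Warp a gated PET frame toward a reference frame using a
    predicted 3-D optical flow field (hypothetical helper).

    frame : (D, H, W) activity volume from one amplitude bin
    flow  : (3, D, H, W) displacement, in voxels, along (z, y, x)
    """
    # Voxel coordinates of the reference frame.
    grid = np.indices(frame.shape).astype(np.float32)
    # Backward warping: sample the moving frame at the displaced
    # coordinates with trilinear interpolation.
    coords = grid + flow
    return map_coordinates(frame, coords, order=1, mode='nearest')
```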
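The corrections are quantified with the tumor intersection over union (IoU), the enclosed activity, and the coefficient of variation (CoV) within the no-motion tumor volume. A minimal NumPy sketch of how such metrics might be computed is given below; the function name, the threshold-based tumor segmentation, and the array layout are hypothetical and may differ from the paper's actual evaluation code.

```python
import numpy as np

def evaluation_metrics(corrected, no_motion_mask):
    """Compute tumor IoU, enclosed activity, and CoV (hypothetical sketch).

    corrected      : (D, H, W) motion-corrected activity volume
    no_motion_mask : boolean tumor mask from the no-motion image
    """
    # Segment the tumor in the corrected image with a simple relative
    # threshold (an assumption; the paper's segmentation may differ).
    threshold = 0.5 * corrected[no_motion_mask].max()
    corrected_mask = corrected >= threshold

    # Intersection over union of the two tumor segmentations.
    intersection = np.logical_and(corrected_mask, no_motion_mask).sum()
    union = np.logical_or(corrected_mask, no_motion_mask).sum()
    iou = intersection / union

    # Total activity enclosed by the no-motion tumor volume.
    voxels = corrected[no_motion_mask]
    total_activity = voxels.sum()

    # Coefficient of variation within the no-motion tumor volume.
    cov = voxels.std() / voxels.mean()

    return iou, total_activity, cov
```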