Paper Title
FundusQ-Net: a Regression Quality Assessment Deep Learning Algorithm for Fundus Images Quality Grading
Paper Authors
Paper Abstract
Objective: Ophthalmological pathologies such as glaucoma, diabetic retinopathy and age-related macular degeneration are major causes of blindness and vision impairment. There is a need for novel decision support tools that can simplify and speed up the diagnosis of these pathologies. A key step in this process is to automatically estimate the quality of the fundus images to ensure they are interpretable by a human operator or a machine learning model. We present a novel fundus image quality scale and a deep learning (DL) model that can estimate fundus image quality relative to this new scale. Methods: A total of 1,245 images were graded for quality by two ophthalmologists on a 1-10 scale with a resolution of 0.5. A DL regression model was trained for fundus image quality assessment. The architecture used was Inception-V3. The model was developed using a total of 89,947 images from 6 databases, of which 1,245 were labeled by the specialists and the remaining 88,702 images were used for pre-training and semi-supervised learning. The final DL model was evaluated on an internal test set (n=209) as well as an external test set (n=194). Results: The final DL model, denoted FundusQ-Net, achieved a mean absolute error of 0.61 (0.54-0.68) on the internal test set. When evaluated as a binary classification model on the public DRIMDB database as an external test set, the model obtained an accuracy of 99%. Significance: The proposed algorithm provides a new robust tool for automated quality grading of fundus images.
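The two evaluation protocols in the abstract can be sketched in a few lines: mean absolute error for the 1-10 regression scale on the internal test set, and a thresholded binary "gradable" decision for the external DRIMDB evaluation. This is a minimal illustrative sketch; the specific binarization threshold (5.0 here) is an assumption, not a value stated in the paper.

```python
# Hedged sketch of the metrics described in the abstract.
# The 5.0 gradability threshold is an illustrative assumption.

def mean_absolute_error(preds, targets):
    """Average |prediction - label| over a set of quality scores."""
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def binary_accuracy(preds, labels, threshold=5.0):
    """Binarize regression scores at `threshold` and compare against
    gradable (True) / ungradable (False) reference labels."""
    decisions = [p >= threshold for p in preds]
    correct = sum(d == l for d, l in zip(decisions, labels))
    return correct / len(labels)

# Toy example with four predicted vs. expert quality scores.
preds  = [8.5, 2.0, 6.5, 9.0]
scores = [9.0, 1.5, 7.0, 8.5]
print(mean_absolute_error(preds, scores))        # 0.5

# External-style binary evaluation against gradability labels.
gradable = [True, False, True, True]
print(binary_accuracy(preds, gradable))          # 1.0
```

In the paper's setting, `preds` would come from the Inception-V3 regression head, and the reported 0.61 MAE and 99% accuracy correspond to these two computations on the internal and external test sets respectively.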