Paper Title
Analysis of Explainable Artificial Intelligence Methods on Medical Image Classification
Paper Authors
Paper Abstract
The use of deep learning in computer vision tasks such as image classification has led to rapid improvements in the performance of such systems. Owing to this substantial increase in utility, artificial intelligence is now employed in many critical tasks. In the medical domain, medical image classification systems are being adopted because of their high accuracy and near parity with human physicians on many tasks. However, these artificial intelligence systems are extremely complex and are regarded as black boxes by scientists, because it is difficult to interpret what exactly led to the predictions made by these models. When such systems are used to assist high-stakes decision-making, it is extremely important to be able to understand, verify, and justify the conclusions reached by the model. Techniques for gaining insight into these black-box models belong to the field of explainable artificial intelligence (XAI). In this paper, we evaluated three different XAI methods across two convolutional neural network models trained to classify lung cancer from histopathological images. We visualized the outputs and analyzed the performance of these methods in order to better understand how to apply explainable artificial intelligence in the medical domain.
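The abstract does not name the three XAI methods or the two CNN architectures used. As an illustrative sketch only, the snippet below shows one common saliency-based XAI technique, Grad-CAM, applied to a generic CNN classifier in PyTorch; the ResNet-18 backbone, the choice of `layer4` as the target layer, and the function names are assumptions for demonstration, not the paper's actual setup.

```python
# Minimal Grad-CAM sketch (illustrative; not the paper's code).
# Assumes a trained CNN classifier; ResNet-18 here is a placeholder backbone.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # assumed backbone; load trained weights in practice
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    # Store the feature maps and attach a hook to capture their gradients.
    activations["value"] = out
    out.register_hook(lambda grad: gradients.update(value=grad))

# Hook the last convolutional block (a typical, but assumed, Grad-CAM target).
model.layer4.register_forward_hook(save_activation)

def grad_cam(image, target_class=None):
    """Return a [0, 1] heatmap of the regions that drove the prediction."""
    logits = model(image)                      # image: (1, 3, H, W) tensor
    if target_class is None:
        target_class = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, target_class].backward()
    acts = activations["value"]                # (1, C, h, w) feature maps
    grads = gradients["value"]                 # (1, C, h, w) gradients
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1))        # weighted channel sum, ReLU
    cam = F.interpolate(cam.unsqueeze(0), size=image.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam.detach()

# Usage: heatmap = grad_cam(preprocessed_histopathology_image)
```

The resulting heatmap can be overlaid on the input histopathological image to visualize which tissue regions contributed most to the classification, which is the kind of output the abstract describes visualizing and comparing across methods.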