Paper Title
Assessing the communication gap between AI models and healthcare professionals: explainability, utility and trust in AI-driven clinical decision-making
Paper Authors
Paper Abstract
This paper contributes a pragmatic evaluation framework for explainable Machine Learning (ML) models for clinical decision support. The study revealed a more nuanced role for ML explanation models when these are pragmatically embedded in the clinical context. Despite the generally positive attitude of healthcare professionals (HCPs) towards explanations as a safety and trust mechanism, a significant set of participants experienced negative effects associated with confirmation bias, accentuating model over-reliance and increasing the effort required to interact with the model. Also, contradicting one of their main intended functions, standard explanation models showed limited ability to support a critical understanding of the model's limitations. However, we found new significant positive effects which reposition the role of explanations within a clinical context: these include reducing automation bias, addressing ambiguous clinical cases (cases where HCPs were uncertain about their decision), and supporting less experienced HCPs in acquiring new domain knowledge.