Paper Title
XAlgo: A Design Probe of Explaining Algorithms' Internal States via Question-Answering
Paper Authors
Paper Abstract
Algorithms often appear as 'black boxes' to non-expert users. While prior work focuses on explainable representations and expert-oriented exploration, we propose and study an interactive question-answering approach to explain deterministic algorithms to non-expert users who need to understand the algorithms' internal states (e.g., students learning algorithms, operators monitoring robots, admins troubleshooting network routing). We construct XAlgo -- a formal model that first classifies the type of question based on a taxonomy and then generates an answer based on a set of rules that extract information from representations of the algorithm's internal states, e.g., its pseudocode. A design probe in an algorithm learning scenario with 18 participants (9 interacting with a Wizard-of-Oz XAlgo and 9 as a control group) yields findings and design implications based on what kinds of questions people ask, how well XAlgo responds, and what challenges remain in bridging users' gulf of understanding of the algorithms.
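The abstract describes a two-stage pipeline: classify an incoming question against a taxonomy, then apply category-specific rules that extract an answer from a representation of the algorithm's internal state (e.g., its pseudocode trace). The sketch below illustrates one possible shape of such a pipeline in Python; the taxonomy labels (state, causal, temporal), the ExecutionStep/Trace structures, and the keyword rules are illustrative assumptions for demonstration, not XAlgo's published model.

```python
# Illustrative sketch only: taxonomy labels, rules, and data structures below
# are assumptions for demonstration, not XAlgo's actual implementation.

from dataclasses import dataclass, field


@dataclass
class ExecutionStep:
    """Snapshot of the algorithm's internal state at one pseudocode line."""
    line: int        # pseudocode line just executed
    variables: dict  # variable name -> value after this step
    note: str = ""   # human-readable description of what happened


@dataclass
class Trace:
    steps: list = field(default_factory=list)

    def latest(self) -> ExecutionStep:
        return self.steps[-1]


def classify_question(question: str) -> str:
    """Map a question to a (hypothetical) taxonomy category via keywords."""
    q = question.lower()
    if q.startswith("why"):
        return "causal"    # e.g., "Why did the elements swap?"
    if "value" in q or q.startswith("what is"):
        return "state"     # e.g., "What is the value of i?"
    if q.startswith("when") or "next" in q:
        return "temporal"  # e.g., "When will the loop stop?"
    return "other"


def answer(question: str, trace: Trace) -> str:
    """Apply a simple per-category rule to extract an answer from the trace."""
    category = classify_question(question)
    step = trace.latest()
    if category == "state":
        # Rule: report any traced variable whose name appears in the question.
        mentioned = [f"{k} = {v}" for k, v in step.variables.items()
                     if k in question]
        return ", ".join(mentioned) if mentioned else "No matching variable."
    if category == "causal":
        # Rule: point back to the pseudocode line responsible for the state.
        return f"Because line {step.line} executed: {step.note}"
    return "Sorry, I cannot answer that type of question yet."


if __name__ == "__main__":
    # Tiny trace of one bubble-sort comparison, purely for demonstration.
    trace = Trace([ExecutionStep(line=3,
                                 variables={"i": 2, "arr": [1, 3, 2]},
                                 note="arr[i] > arr[i+1], so the elements swap")])
    print(answer("What is the value of i?", trace))
    print(answer("Why did the elements swap?", trace))
```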