Paper Title

Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations

Paper Authors

Rong, Yao; Leemann, Tobias; Nguyen, Thai-Trang; Fiedler, Lisa; Qian, Peizhu; Unhelkar, Vaibhav; Seidel, Tina; Kasneci, Gjergji; Kasneci, Enkelejda

Paper Abstract

Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models, are both a necessity and a challenge. In this paper, we explore how HCI and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 97 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, usability, and human-AI collaboration performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
