Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users’ Information Processing
Although emerging regulations increasingly require that AI applications be interpretable to users, we know little about how such explainability affects human information processing. We help fill this gap with two experimental studies. We show that explanations pave the way for AI systems to reshape users' understanding of the world around them. Specifically, state-of-the-art explainability methods evoke mental model adjustments that are subject to confirmation bias, allowing misconceptions and mental errors to persist and even accumulate. Moreover, mental model adjustments create spillover effects that alter users' behavior in related but distinct domains where they do not have access to an AI system. These spillover effects risk manipulating user behavior, promoting discriminatory biases, and biasing decision making. The reported findings serve as a warning that the indiscriminate use of modern explainability methods as an isolated measure to address AI systems' black-box problem can lead to unintended, unforeseen consequences, because it creates a new channel through which AI systems can influence human behavior across domains.
Augmenting Medical Diagnosis Decisions? An Investigation into Physicians’ Decision-Making Process with Artificial Intelligence
Ekaterina Jussupow, Kai Spohrer, Armin Heinzl, Joshua Gawlitza · Information Systems Research
Bots with Feelings: Should AI Agents Express Positive Emotion in Customer Service?
Elizabeth Han, Dezhi Yin, Han Zhang · Information Systems Research
The Janus Effect of Generative AI: Charting the Path for Responsible Conduct of Scholarly Activities in Information Systems
Anjana Susarla, Ram D. Gopal, Jason Bennett Thatcher, Suprateek Sarker · Information Systems Research
Getting Personal: A Deep Learning Artifact for Text-Based Measurement of Personality
Kai Yang, Raymond Y.K. Lau, Ahmed Abbasi · Information Systems Research
Estimating the Impact of “Humanizing” Customer Service Chatbots
Scott Schanke, Gordon Burtch, Gautam Ray · Information Systems Research
Cognitive Challenges in Human–Artificial Intelligence Collaboration: Investigating the Path Toward Productive Delegation
Andreas Fügener, Jörn Grahl, Alok Gupta, Wolfgang Ketter · Information Systems Research
Human–Robot Interaction: When Investors Adjust the Usage of Robo-Advisors in Peer-to-Peer Lending
Ruyi Ge, Zhiqiang Zheng, Xuan Tian, Li Liao · Information Systems Research
Crowds, Lending, Machine, and Bias
Runshan Fu, Yan Huang, Param Vir Singh · Information Systems Research