TY - JOUR
T1 - Should robots display what they hear? Mishearing as a practical accomplishment
AU - Rudaz, Damien
AU - Licoppe, Christian
N1 - Publisher Copyright:
Copyright © 2025 Rudaz and Licoppe.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - As a contribution to research on transparency and failures in human–robot interaction (HRI), our study investigates whether the informational ecology configured by publicly displaying a robot’s automatic speech recognition (ASR) results is consequential for how miscommunications emerge and are dealt with. After a preliminary quantitative analysis of our participants’ gaze behavior during an experiment where they interacted with a conversational robot, we rely on a micro-analytic approach to detail how the interpretation of this robot’s conduct as inadequate was configured by what it displayed as having “heard” on its tablet. We investigate cases where an utterance or gesture by the robot was treated by participants as sequentially relevant only as long as they had not read the automatic speech recognition transcript but re-evaluated it as troublesome once they had read it. In doing so, we contribute to HRI by showing that systematically displaying an ASR transcript can play a crucial role in participants’ interpretation of a co-constructed action (such as shaking hands with a robot) as having “failed”. We demonstrate that “mistakes” and “errors” can be approached as practical accomplishments that emerge as such over the course of interaction rather than as social or technical phenomena pre-categorized by the researcher in reference to criteria exogenous to the activity being analyzed. Finally, focusing on two video fragments, we find that this peculiar informational ecology did not merely affect how the robot was responded to. Instead, it modified the very definition of “mutual understanding” that was enacted and oriented to as relevant by the human participants in these fragments. Beyond social robots, we caution that systematically providing such transcripts is a design decision not to be taken lightly; depending on the setting, it may have unintended consequences on interactions between humans and any form of conversational interface.
AB - As a contribution to research on transparency and failures in human–robot interaction (HRI), our study investigates whether the informational ecology configured by publicly displaying a robot’s automatic speech recognition (ASR) results is consequential for how miscommunications emerge and are dealt with. After a preliminary quantitative analysis of our participants’ gaze behavior during an experiment where they interacted with a conversational robot, we rely on a micro-analytic approach to detail how the interpretation of this robot’s conduct as inadequate was configured by what it displayed as having “heard” on its tablet. We investigate cases where an utterance or gesture by the robot was treated by participants as sequentially relevant only as long as they had not read the automatic speech recognition transcript but re-evaluated it as troublesome once they had read it. In doing so, we contribute to HRI by showing that systematically displaying an ASR transcript can play a crucial role in participants’ interpretation of a co-constructed action (such as shaking hands with a robot) as having “failed”. We demonstrate that “mistakes” and “errors” can be approached as practical accomplishments that emerge as such over the course of interaction rather than as social or technical phenomena pre-categorized by the researcher in reference to criteria exogenous to the activity being analyzed. Finally, focusing on two video fragments, we find that this peculiar informational ecology did not merely affect how the robot was responded to. Instead, it modified the very definition of “mutual understanding” that was enacted and oriented to as relevant by the human participants in these fragments. Beyond social robots, we caution that systematically providing such transcripts is a design decision not to be taken lightly; depending on the setting, it may have unintended consequences on interactions between humans and any form of conversational interface.
KW - action ascription
KW - automatic speech recognition
KW - conversation analysis
KW - errors and mistakes
KW - ethnomethodology
KW - mishearing
KW - repair
KW - transparency
UR - https://www.scopus.com/pages/publications/105018837082
U2 - 10.3389/frobt.2025.1597276
DO - 10.3389/frobt.2025.1597276
M3 - Article
AN - SCOPUS:105018837082
SN - 2296-9144
VL - 12
JO - Frontiers in Robotics and AI
JF - Frontiers in Robotics and AI
M1 - 1597276
ER -