TY - GEN
T1 - Reinforcement Learning-Based Trust Dynamics Prediction Model for Teleoperated Human-Robot Interaction
AU - Garcia Cardenas, Juan Jose
AU - Tapus, Adriana
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - Trust plays a crucial role in user performance during teleoperated human-robot interaction. This study presents a reinforcement learning (RL) model that adapts to dynamic trust levels using physiological data and task performance metrics. Participants completed a complex teleoperation task under three conditions: (C1) limited feedback, (C2) AI-generated verbal guidance, and (C3) AI guidance paired with real-time RViz visualization. Physiological indicators, such as blink rate, galvanic skin response (GSR), and facial temperature, were tracked along with task performance metrics such as success rate and completion time. Statistical analyses revealed that increased task complexity in C1 reduced trust and increased cognitive load, leading to poorer performance. AI-generated guidance in C2 improved task understanding and performance, supporting Hypothesis H2. In C3, combining AI guidance with RViz visualization further boosted trust and reduced cognitive load, partially confirming Hypothesis H3. The RL model successfully adapted guidance strategies based on real-time user states, and additional testing showed that the agent's adaptive strategies significantly increased user trust and improved performance. These results underscore the potential of adaptive RL models to enhance trust and efficiency in teleoperated human-robot systems.
AB - Trust plays a crucial role in user performance during teleoperated human-robot interaction. This study presents a reinforcement learning (RL) model that adapts to dynamic trust levels using physiological data and task performance metrics. Participants completed a complex teleoperation task under three conditions: (C1) limited feedback, (C2) AI-generated verbal guidance, and (C3) AI guidance paired with real-time RViz visualization. Physiological indicators, such as blink rate, galvanic skin response (GSR), and facial temperature, were tracked along with task performance metrics such as success rate and completion time. Statistical analyses revealed that increased task complexity in C1 reduced trust and increased cognitive load, leading to poorer performance. AI-generated guidance in C2 improved task understanding and performance, supporting Hypothesis H2. In C3, combining AI guidance with RViz visualization further boosted trust and reduced cognitive load, partially confirming Hypothesis H3. The RL model successfully adapted guidance strategies based on real-time user states, and additional testing showed that the agent's adaptive strategies significantly increased user trust and improved performance. These results underscore the potential of adaptive RL models to enhance trust and efficiency in teleoperated human-robot systems.
UR - https://www.scopus.com/pages/publications/105024541932
U2 - 10.1109/RO-MAN63969.2025.11217787
DO - 10.1109/RO-MAN63969.2025.11217787
M3 - Conference contribution
AN - SCOPUS:105024541932
T3 - IEEE International Workshop on Robot and Human Communication, RO-MAN
SP - 1617
EP - 1624
BT - 2025 34th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2025
PB - IEEE Computer Society
T2 - 34th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2025
Y2 - 25 August 2025 through 29 August 2025
ER -