Comparing a Mentalist and an Interactionist Approach for Trust Analysis in Human-Robot Interaction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Trust is an important aspect of human-robot interaction (HRI), as it mediates the performance of many activities. Users' trust may be affected when robots make mistakes. To properly time trust-repair actions, robots should detect trust variations during the interaction. Very few computational models of trust exist for such a task, and the existing ones rely on either psychological or sociological theories, which give rise to different definitions and analysis tools. We can distinguish two main approaches in the trust literature: the mentalist and the interactionist one. In this paper, we compare both approaches for trust detection, and explore how the adoption of two different assessment tools on an HRI dataset may lead to different results. We identify criteria that set the approaches apart, and provide guidelines on the possibilities that each one offers depending on the target computational model of trust.

Original language: English
Title of host publication: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction
Publisher: Association for Computing Machinery
Pages: 273-280
Number of pages: 8
ISBN (Electronic): 9798400708244
DOIs
Publication status: Published - 4 Dec 2023
Event: 11th Conference on Human-Agent Interaction, HAI 2023 - Gothenburg, Sweden
Duration: 4 Dec 2023 – 11 Dec 2023

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 11th Conference on Human-Agent Interaction, HAI 2023
Country/Territory: Sweden
City: Gothenburg
Period: 4/12/23 – 11/12/23

Keywords

  • HRI
  • Interactional Sociology
  • Methodologies
  • Psychology
  • Trust
