RoboLeaks: Non-strategic Cues for Leaking Deception in Social Robots / Esposito, Raffaella; Rossi, Alessandra; Ponticorvo, Michela; Rossi, Silvia. - (2025), pp. 1111-1120. (ACM/IEEE International Conference on Human-Robot Interaction) [10.1109/HRI61500.2025.10974221].

RoboLeaks: Non-strategic Cues for Leaking Deception in Social Robots

Raffaella Esposito (First author; Writing – Original Draft Preparation); Alessandra Rossi (Writing – Review & Editing); Michela Ponticorvo (Member of the Collaboration Group); Silvia Rossi (Writing – Review & Editing)
2025

Abstract

Deception is a complex phenomenon that is deeply intertwined with human social interactions. Deception between humans often manifests through verbal, vocal, and visible behaviors. While it often has unethical implications, it can also serve valuable social functions, such as maintaining relationships, protecting emotions, or managing difficult situations. For this reason, it is also being investigated in robotic applications. In this work, we propose a framework for implementing specific robot behaviors through non-strategic cues of deception (known as leakage) during deceptive communication with humans. We address the ethical dimensions of robotic deception by acknowledging the possibility of physical or psychological harm in sensitive contexts, such as healthcare and assistive robotics. We propose to mitigate possible drops in trust towards robots by making deceptive behaviors more transparent and human-like, that is, by equipping robots with seemingly unintentional behaviors that betray deception. To this end, we propose a low-risk educational scenario in which a robot interacts with students in a problem-solving game to test the effects of deception leakage on students' perception of the robot's human-likeness and intentionality, as well as on their engagement and trust in the robot.
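As a purely illustrative sketch, and not the authors' implementation, a leakage framework of this kind could be organized as a component that pairs each deceptive utterance with a seemingly unintentional cue. The cue names, the leak probability, and the LeakageController class below are hypothetical placeholders, assumed only for illustration.

```python
# Minimal, hypothetical sketch of a leakage component: when the robot produces a
# deceptive utterance, it may also enact a non-strategic cue (gaze aversion,
# speech hesitation, self-touch). Names and probabilities are illustrative only.
import random
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class LeakageCue:
    name: str          # short identifier for the cue
    description: str   # how the robot would enact the cue


CUES = [
    LeakageCue("gaze_aversion", "briefly look away from the interlocutor"),
    LeakageCue("speech_hesitation", "insert a filled pause ('uh...') before the statement"),
    LeakageCue("self_touch", "perform a short self-adaptor gesture, e.g., touching the head"),
]


class LeakageController:
    """Pairs a deceptive utterance with an optional, seemingly unintentional leakage cue."""

    def __init__(self, leak_probability: float = 0.7, rng: Optional[random.Random] = None):
        self.leak_probability = leak_probability  # chance of leaking a cue when deceiving
        self.rng = rng or random.Random()

    def annotate(self, utterance: str, is_deceptive: bool) -> Tuple[str, Optional[LeakageCue]]:
        """Return the utterance and the cue to enact alongside it (None if no leakage)."""
        if is_deceptive and self.rng.random() < self.leak_probability:
            return utterance, self.rng.choice(CUES)
        return utterance, None


if __name__ == "__main__":
    controller = LeakageController(leak_probability=1.0, rng=random.Random(42))
    text, cue = controller.annotate("I think the red block is the right answer.", is_deceptive=True)
    print(text)
    if cue is not None:
        print(f"Leakage cue: {cue.name} -> {cue.description}")
```

Sampling the cue at random, rather than scripting a fixed cue for every lie, is one way such behaviors could be kept looking unintentional rather than rehearsed; whether and how often a cue leaks would be an experimental parameter.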
Files in this record:
  • File: 3721488.3721632.pdf
  • Access: open access
  • Type: Published version (PDF)
  • License: Public domain
  • Size: 3.06 MB
  • Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/997013
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science (ISI): not available