RoboLeaks: Non-strategic Cues for Leaking Deception in Social Robots / Esposito, Raffaella; Rossi, Alessandra; Ponticorvo, Michela; Rossi, Silvia. - (2025), pp. 1111-1120. (ACM/IEEE International Conference on Human-Robot Interaction) [10.1109/HRI61500.2025.10974221].
RoboLeaks: Non-strategic Cues for Leaking Deception in Social Robots
Raffaella Esposito (First Author; Writing – Original Draft Preparation); Alessandra Rossi (Writing – Review & Editing); Michela Ponticorvo (Member of the Collaboration Group); Silvia Rossi (Writing – Review & Editing)
2025
Abstract
Deception is a complex phenomenon that is deeply intertwined with human social interactions. Deception between humans often manifests through verbal, vocal, and visible behaviors. While it often has unethical implications, it can also serve valuable social functions, such as maintaining relationships, protecting emotions, or managing difficult situations. For this reason, it is also being investigated in robotic applications. In this work, we propose a framework for implementing specific robot behaviors through non-strategic cues of deception (known as leakage) during deceptive communication with humans. We reflect on the ethical dimensions of robotic deception by acknowledging the potential for physical or psychological harm in sensitive contexts, such as healthcare and assistive robotics. We propose mitigating possible drops in trust towards robots by making deceptive behaviors more transparent and human-like, equipping robots with seemingly unintentional behaviors that betray deception. To this end, we propose a low-risk educational scenario in which a robot interacts with students in a problem-solving game, to test the effects of deception leakage on students' perception of the robot, its perceived human-likeness and intentionality, and their engagement with and trust in the robot.

| File | Type | License | Size | Format |
|---|---|---|---|---|
| 3721488.3721632.pdf (open access) | Publisher's version (PDF) | Public domain | 3.06 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


