D'Arco, L.; Raggioli, L.; Randazzo, G.; De Gasperis, G.; Chella, A.; Costantini, S.; Rossi, S. (2025). Towards Trustworthy and Explainable Socially Assistive Robots: A Cognitive Architecture for Dietary Guidance. In 2025 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR 2025), pp. 1-6. [10.1109/SIMPAR62925.2025.10979036]
Towards Trustworthy and Explainable Socially Assistive Robots: A Cognitive Architecture for Dietary Guidance
D'Arco L.; Raggioli L.; Costantini S.; Rossi S.
2025
Abstract
Socially Assistive Robots (SARs) are emerging as promising tools for promoting healthy lifestyle habits. To achieve this goal, they must be able to exhibit trustworthy and legible behaviors. In this work, we propose a cognitive architecture that integrates multimodal perception, symbolic reasoning, memory-enhanced decision-making, and adaptive interaction strategies to create an explainable and engaging dietary assistant. The key idea is to provide the robot with the capability to interact iteratively with a user and adapt the dietary plan based on their current state, preferences, and food restrictions, while explicitly conveying its inner decision-making and thought process. To achieve this, we employ a graph-enhanced Large Language Model (LLM), which queries acquired contextual, semantic, and episodic knowledge to generate personalized meal recommendations. These recommendations are subsequently refined through a verification process that enforces constraints such as caloric limits and ingredient intolerances, ensuring dietary adherence. To make the decision-verification process transparent, the robot progressively verbalizes its reasoning and provides justifications for its recommendations, which also enhances the user's trust. Context-relevant non-verbal movements are also generated so that the robot can express empathy. We expect our framework to increase user trust, engagement, and adherence to healthy behaviors, allowing SARs to function as credible and effective health assistants.
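The constraint-verification step described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the names (`UserProfile`, `Meal`, `verify_meal`) and the constraint values are hypothetical, and only the two constraints the abstract names (caloric limit, ingredient intolerances) are modeled.

```python
# Hypothetical sketch of the recommendation-verification loop: candidate meals
# proposed by the LLM are checked against hard dietary constraints before being
# presented to the user. The returned reasons double as the justifications the
# robot verbalizes. All names and values are illustrative.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    calorie_limit: float                        # kcal allowed for this meal
    intolerances: set = field(default_factory=set)


@dataclass
class Meal:
    name: str
    calories: float
    ingredients: set


def verify_meal(meal: Meal, profile: UserProfile) -> tuple[bool, list]:
    """Return (accepted, reasons); reasons explain any rejection."""
    reasons = []
    if meal.calories > profile.calorie_limit:
        reasons.append(f"{meal.name} exceeds the caloric limit "
                       f"({meal.calories} > {profile.calorie_limit} kcal)")
    blocked = meal.ingredients & profile.intolerances
    if blocked:
        reasons.append(f"{meal.name} contains intolerant ingredients: "
                       f"{', '.join(sorted(blocked))}")
    return (not reasons, reasons)


user = UserProfile(calorie_limit=650, intolerances={"lactose"})
candidate = Meal("cheese risotto", calories=720, ingredients={"rice", "lactose"})
ok, reasons = verify_meal(candidate, user)
# ok is False; 'reasons' lists both violated constraints
```

A rejected candidate would be sent back to the LLM for refinement, with the reasons both guiding regeneration and being verbalized to the user for transparency.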
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| SIMPAR25___ADVISOR_Architecture.pdf | Open access | Post-print document | Publisher's copyright | 184.73 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


