Trust in NLP Models: Ethical Considerations on the Reckless Use of AI / Marassi, Lidia; Patwardhan, Narendra; Marrone, Stefano; Sansone, Carlo. - 428:(2025), pp. 357-363. [10.1007/978-981-96-0994-9_33]
Trust in NLP Models: Ethical Considerations on the Reckless Use of AI
Marassi, Lidia; Patwardhan, Narendra; Marrone, Stefano; Sansone, Carlo
2025
Abstract
Trust is a key element in the creation of positive bonds between individuals, and this concept extends to relationships between humans and Artificial Intelligence (AI). However, trust in AI must be placed with care: as in human relationships, misplaced trust can lead to negative consequences. In recent years, the empathy developed by humans towards AI has been shown to have positive effects on well-being. Increasingly accurate Natural Language Processing (NLP) models, for example, have made it possible to use chatbots to support the psychological well-being of users, addressing problems such as depression, anxiety, and social isolation. This approach has enabled many people to access psychological support services and has reduced the stigma associated with seeking them. Trust plays a significant role in the growing popularity of virtual assistants such as ChatGPT which, despite lacking the ability to generate genuine trust or empathy, seem to establish an empathetic bond with users due to the quality of their responses. However, trust in AI systems should not lead one to underestimate the risks and limitations of such technologies. NLP models are susceptible to bias and can lead to social consequences that are difficult to manage. It is crucial to make users aware of the limitations of AI and of human responsibility in ethical decisions. This article emphasizes the need for serious reflection on the consequences of evolving trust bonds with AI. After analyzing the limitations and biases of NLP models, attention is drawn to the growing social awareness of ethics in the use of AI, both for users and for practitioners.


