
Navigating Ethical Risk in Knowledge Management Practices Amid AI Turbulence: A Systems-Based Perspective / Caputo, Francesco; Cervino, Cristina; D'Amore, Raffaele; Bosco, Gerardo; Napoli, Luigi; Mazzotta, Regina. - (2025), pp. 1-10. (IL KNOWLEDGE MANAGEMENT NELLO SVILUPPO DI UNA COMUNITÀ SCIENTIFICA GLOBALE, Salerno, 30 maggio 2025).

Navigating Ethical Risk in Knowledge Management Practices Amid AI Turbulence: A Systems-Based Perspective

Francesco Caputo; Cristina Cervino; Raffaele D'Amore; Gerardo Bosco; Luigi Napoli; Regina Mazzotta
2025

Abstract

The pervasive diffusion of artificial intelligence (AI) within knowledge-intensive organizations is reshaping knowledge management (KM) systems at multiple levels. From algorithmic recommendation engines to generative knowledge tools, AI increasingly mediates how knowledge is created, shared, and operationalized (Cillo et al., 2022; Saviano et al., 2023). While these technologies offer opportunities for personalization, efficiency, and scale, they simultaneously raise concerns about transparency, accountability, and the erosion of human judgment in value-laden knowledge processes (Rohden & Zeferino, 2023; Douglas et al., 2024). As AI becomes more autonomous and embedded, organizations face not only integration challenges but also rising uncertainty regarding the ethical acceptability and social legitimacy of their KM infrastructures. Particularly during what we define as AI turbulence, a phase in which the deployment of AI outpaces ethical reflection and stakeholder alignment, organizations may encounter unpredictable risks, trust erosion, and resistance to knowledge technologies. In such contexts, the perception of ethical risk becomes a critical factor influencing knowledge flows, system usability, and institutional coherence (Winfield & Jirotka, 2018; Douglas et al., 2024). This work addresses the following research question: How does the perceived ethical risk associated with AI influence KM practices in organizations, and how can organizations anticipate and mitigate these risks to ensure responsible and sustainable KM innovation? This question is rooted in the recognition that AI is not a neutral computational tool but a socio-technical agent that redistributes agency, reconfigures power asymmetries, and creates new ethical dependencies among organizational stakeholders (Blackman, 2022; Douglas et al., 2024).
Understanding perceived ethical risk in this context requires going beyond normative ethical analysis to explore how individuals and organizations interpret and respond to the cognitive opacity, emotional dissonance, and accountability gaps introduced by AI. These perceptions may lead knowledge actors to withdraw from AI-supported processes, challenge system legitimacy, or experience reduced knowledge engagement, ultimately threatening both KM effectiveness and ethical resilience (Rohden & Zeferino, 2023; Subaveerapandiyan et al., 2023). To address this challenge, this study builds on systems theory, stakeholder responsibility models, and hybrid intelligence paradigms to develop a framework aimed at (1) clarifying the socio-cognitive mechanisms underlying ethical risk perception in knowledge ecosystems and (2) supporting organizations in mapping risk hotspots and activating governance levers for ethical foresight (Winfield & Jirotka, 2018; Saviano et al., 2023).
Files in this product:
File: Navigating Ethical Risk in Knowledge Management Practices Amid AI Turbulence.pdf (Adobe PDF, 274.75 kB)
Type: Pre-print
License: Publisher's copyright
Access: authorized users only (request a copy)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/1036859
Citations: PubMed Central: n/a; Scopus: n/a; Web of Science: n/a