Algorithmic reconfiguration of mental health care: risk, trust and vulnerability in Italian professionals’ accounts / Banfi, Giulia; Crescentini, Noemi. - In: HEALTH RISK & SOCIETY. - ISSN 1369-8575. - (2026).
Algorithmic reconfiguration of mental health care: risk, trust and vulnerability in Italian professionals’ accounts
Noemi Crescentini
2026
Abstract
The growing integration of generative artificial intelligence (AI) into mental health care raises critical questions for risk studies about how trust, risk perception, and professional responsibility are reconfigured in algorithmically mediated therapeutic contexts. This study examines how Italian mental health professionals negotiate and interpret the introduction of AI into psychological practice, with particular attention to the social construction of risk and the conditions under which trust in algorithmic systems is extended or withheld. Data were collected in Italy between May and July 2025 through semi-structured interviews with 14 practising psychologists, analysed using reflexive thematic analysis. Three interconnected dimensions emerged. First, professionals actively constructed risk perception through boundary work, distinguishing between acceptable instrumental automation and threatening encroachments on clinical judgement. Second, trust towards algorithmic systems and digital platforms was negotiated selectively and conditionally, shaped by algorithmic opacity and the reorganisation of therapeutic labour within platform economies. Third, vulnerable patients emerged as a site of amplified risk, where structural inequalities in the Italian mental health care system were compounded by unsupervised reliance on low-cost AI tools. These findings suggest that risk and trust in AI-mediated mental health care cannot be addressed through technical or regulatory frameworks alone, but require collective responses attentive to the relational, epistemic, and structural conditions under which care is practised.


