Enabling COVID-19 Detection from Multiple Audio Recordings: A Preliminary Comparison Between Cough, Breath, and Speech Signals / Ponsiglione, Alfonso Maria; Angelone, Francesca; Sparaco, Rossella; Piccolo, Salvatore; Parrish, Amy; Calcagno, Andrea; Fournier, Guillaume; de Brito Martins, Ayana; Cordella, Fulvio; Arienzo, Arianna; Castella, Lorenzo; Vitale, Vincenzo Norman; Amato, Francesco; Romano, Maria. - (2024), pp. 373-383. (Paper presented at the EMBEC 2024 conference held in Portorož, Slovenia, 9-13 June 2024).

Enabling COVID-19 Detection from Multiple Audio Recordings: A Preliminary Comparison Between Cough, Breath, and Speech Signals

Alfonso Maria Ponsiglione;Francesca Angelone;Rossella Sparaco;Salvatore Piccolo;Vincenzo Norman Vitale;Francesco Amato;Maria Romano
2024

Abstract

Patients with COVID-19 experience severe respiratory and vocal difficulties, as well as symptoms that give rise to distinctive acoustic characteristics in their voices. The present study exploits vocal biomarkers extracted from cough, speech, and breathing audio recordings obtained through personal smartphones from both SARS-CoV-2-infected and non-infected participants accessing two different healthcare facilities. The results provide findings on the use of acoustic feature sets derived from low-level feature representations for COVID-19 recognition from cough, breath, and speech patterns. Machine learning models were trained on datasets from individual vocal exercises and on a dataset combining all three exercises (cough, breath, and speech). The classification models achieved up to 68.6% accuracy, 86.7% sensitivity, and 66% specificity; these values, and the most significant features, vary with the type of vocal pattern examined and the model adopted, indicating that audio characteristics may be used to detect COVID-19 symptoms and that combining multiple audio patterns from different vocal tasks yields the most encouraging classification performance.
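As context for the figures reported in the abstract, the sketch below (not taken from the paper; the labels and predictions are hypothetical placeholders) shows how accuracy, sensitivity, and specificity are derived from a binary confusion matrix, with 1 marking a SARS-CoV-2-positive recording.

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), and specificity
    (recall on negatives) for binary labels, where 1 = COVID-19 positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, sensitivity, specificity

# Hypothetical toy data: 4 infected (1) and 4 non-infected (0) recordings.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
acc, sens, spec = classification_metrics(y_true, y_pred)
print(acc, sens, spec)  # 0.75 0.75 0.75
```

Sensitivity reflects how many infected speakers the model flags, while specificity reflects how many healthy speakers it correctly clears, which is why the paper reports all three metrics rather than accuracy alone.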
Files for this item:

voice.pdf
Type: Publisher's Version (PDF)
License: Private/restricted access (authorized users only)
Size: 943.81 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/961918
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: 0