XAI approach for addressing the dataset shift problem: BCI as a case study / Apicella, A.; Isgro, F.; Prevete, R. - 3319 (2022), pp. 83-88. (Paper presented at the 1st Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming, BEWARE 2022, held in Italy in 2022.)
XAI approach for addressing the dataset shift problem: BCI as a case study
Apicella A.; Isgro F.; Prevete R.
2022
Abstract
In the Machine Learning (ML) literature, a well-known problem is dataset shift: contrary to the standard ML assumption, the data in the training and test sets can follow different probability distributions, leading ML systems towards poor generalisation performance. Such systems can therefore be unreliable and risky, particularly when used in safety-critical domains. This problem is keenly felt in the Brain-Computer Interface (BCI) context, where bio-signals such as electroencephalographic (EEG) signals are used. Indeed, EEG signals are highly non-stationary, both over time and across different subjects. Despite several efforts to develop BCI systems that cope with different acquisition times or subjects, performance in many BCI applications remains low. Exploiting the knowledge provided by eXplainable Artificial Intelligence (XAI) methods can help develop EEG-based AI approaches that outperform current ones. The proposed framework will give BCI systems greater robustness and reliability with respect to the current state of the art, alleviating the dataset shift problem and allowing a BCI system to be used by different subjects at different times without further calibration/training stages.
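The dataset shift phenomenon described in the abstract can be illustrated with a minimal, hypothetical sketch, not taken from the paper: the synthetic data, the 8 invented features, and the shift size are assumptions made purely for illustration. A classifier is trained on one distribution (a "calibration" session) and evaluated both on data from the same distribution and on data whose feature distribution has shifted, as happens across EEG sessions or subjects; accuracy drops sharply on the shifted data.

```python
# Illustrative sketch only (not the paper's framework): synthetic features and
# shift parameters are invented to show how dataset shift hurts generalisation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n_per_class, mean_shift=0.0):
    # Two classes drawn from Gaussians over 8 synthetic features;
    # `mean_shift` displaces the whole feature distribution, mimicking
    # a new recording session or a different subject.
    X0 = rng.normal(loc=0.0 + mean_shift, scale=1.0, size=(n_per_class, 8))
    X1 = rng.normal(loc=1.0 + mean_shift, scale=1.0, size=(n_per_class, 8))
    X = np.vstack([X0, X1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

X_train, y_train = make_data(500)                       # training/calibration data
X_test_same, y_test_same = make_data(500)               # same distribution
X_test_shift, y_test_shift = make_data(500, mean_shift=1.5)  # shifted distribution

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Accuracy, same distribution:   ", clf.score(X_test_same, y_test_same))
print("Accuracy, shifted distribution:", clf.score(X_test_shift, y_test_shift))
```

With these settings the classifier scores close to chance on the shifted test set while remaining accurate on the unshifted one, which is the behaviour the abstract refers to when it motivates avoiding per-session recalibration.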
File | Access | Type | Licence | Size | Format
---|---|---|---|---|---
paper8.pdf | Open access | Publisher's version (PDF) | Creative Commons | 767.15 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.