XAI Meets Mobile Traffic Classification: Understanding and Improving Multimodal Deep Learning Architectures / Nascita, A.; Montieri, A.; Aceto, G.; Ciuonzo, D.; Persico, V.; Pescapè, A. - In: IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT. - ISSN 1932-4537. - 18:4(2021), pp. 4225-4246. [10.1109/TNSM.2021.3098157]
XAI Meets Mobile Traffic Classification: Understanding and Improving Multimodal Deep Learning Architectures
Nascita A.; Montieri A.; Aceto G.; Ciuonzo D.; Persico V.; Pescapè A.
2021
Abstract
The increasing diffusion of mobile devices has dramatically changed the network traffic landscape, with Traffic Classification (TC) taking on a fundamental role while facing new and unprecedented challenges. Deep Learning (DL) techniques have recently arisen as an appealing solution, overcoming the performance of Machine Learning (ML) techniques that rely on tedious and time-consuming handcrafted feature design. Still, the black-box nature of DL models prevents their practical and trustworthy adoption in critical scenarios where the reliability and interpretability of results and policies are of key importance. To cope with these limitations, eXplainable Artificial Intelligence (XAI) techniques have recently attracted the interest of the community. Accordingly, in this work we investigate trustworthiness and interpretability via XAI-based techniques to understand, interpret, and improve the behavior of state-of-the-art multimodal DL traffic classifiers. As opposed to common results in the XAI literature, the proposed methodology aims to provide global interpretations rather than sample-based ones. Results, based on an open dataset, allow us to complement these findings with domain knowledge.