
Ineffectiveness of Digital Transformations for Detecting Adversarial Attacks Against Quantized and Approximate CNNs

Barone, S.; Casola, V.; Della Torca, S.
2024

Abstract

Convolutional Neural Networks (CNNs) have achieved superhuman performance on computer vision tasks. However, these networks are becoming increasingly complex and resource-intensive, and they are susceptible to adversarial attacks. On the one hand, techniques such as Quantization and Approximate Computing (AxC) have been proposed to reduce the complexity and the power consumption of CNNs, respectively. On the other hand, ever more precise and powerful adversarial attacks have been crafted, along with new methodologies to defend against them. Nevertheless, the relationship between the efficiency and the security of CNNs has not been adequately addressed. This article therefore examines the potential for detecting adversarial attacks against CNNs through image transformations, in the context of quantized and approximate neural networks. The experimental results indicate that image-transformation techniques are not effective at detecting adversarial samples against quantized and approximate CNNs, despite their success in detecting such samples against floating-point CNNs.
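The detection approach the abstract refers to — flagging an input as adversarial when a model's prediction changes noticeably after a simple image transformation — can be sketched as follows. This is a minimal illustrative example only, not the authors' implementation: the function names, the bit-depth-reduction transform, the L1-distance score, and the threshold are all assumptions chosen to make the idea concrete.

```python
import numpy as np

def reduce_bit_depth(image, bits=4):
    """Transform an image (floats in [0, 1]) by reducing its bit depth.

    Benign inputs usually survive this with little change in the model's
    prediction; adversarial perturbations are often destroyed by it.
    """
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

def detect_adversarial(model, image, threshold=0.5, bits=4):
    """Flag `image` as adversarial when the model's output on the original
    and on the transformed image diverge by more than `threshold` in
    L1 distance. `model` maps an image to a probability vector.
    """
    p_original = model(image)
    p_transformed = model(reduce_bit_depth(image, bits))
    score = np.abs(p_original - p_transformed).sum()
    return score > threshold, score
```

Usage: run `detect_adversarial(model, x)` at inference time and reject inputs whose score exceeds the threshold. The paper's finding is that, for quantized and approximate CNNs, this divergence score no longer separates adversarial from benign inputs the way it does for floating-point CNNs.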
Ineffectiveness of Digital Transformations for Detecting Adversarial Attacks Against Quantized and Approximate CNNs / Barone, S.; Casola, V.; Della Torca, S. - (2024), pp. 290-295. (Paper presented at the 2024 IEEE International Conference on Cyber Security and Resilience, CSR 2024, held in the United Kingdom in 2024) [10.1109/CSR61664.2024.10679345].
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/990037
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: ND