Fingerprint Adversarial Presentation Attack in the Physical Domain / Marrone, S.; Casula, R.; Orru, G.; Marcialis, G. L.; Sansone, C. - 12666:(2021), pp. 530-543. (Paper presented at the 25th International Conference on Pattern Recognition Workshops, ICPR 2020, in 2021) [10.1007/978-3-030-68780-9_42].

Fingerprint Adversarial Presentation Attack in the Physical Domain

Marrone S.; Sansone C.
2021

Abstract

With the advent of the deep learning era, Fingerprint-based Authentication Systems (FAS) equipped with Fingerprint Presentation Attack Detection (FPAD) modules have been able to counter attacks carried out at the sensor with artificial replicas of fingerprints. Previous works highlighted the vulnerability of FPADs to digital adversarial attacks. In a realistic scenario, however, attackers may not be able to feed a digitally perturbed image directly to the deep-learning-based FPAD, since the channel between the sensor and the FPAD is usually protected. In this paper we therefore investigate the threat level of adversarial attacks against FPADs in the physical domain. By materially realising fakes from the adversarial images, we were able to present them to the system through its only exposed component, the sensor. To the best of our knowledge, this represents the first proof-of-concept of a fingerprint adversarial presentation attack. We evaluated how much the liveness score changed when feeding the system with digital and printed adversarial images. To measure what portion of this increase is due to the printing itself, we also re-printed the original spoof images without injecting any perturbation. Experiments conducted on the LivDet 2015 dataset show that the printed adversarial images achieve a ∼100% attack success rate against an FPAD when the attacker can make multiple (10) presentation attempts at the sensor, and a fairly good result (∼28%) in a one-shot scenario. Although this work must be considered a proof-of-concept, it is a promising pioneering attempt confirming that an adversarial presentation attack is feasible and dangerous.
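The record contains no code; as a minimal sketch of the digital half of the attack described above, the following Python fragment (assuming PyTorch, an FGSM-style perturbation, a CNN liveness classifier fpad_net with class 1 = 'live', and a step size epsilon — all illustrative assumptions, not the authors' actual method) shows how a spoof fingerprint image could be nudged towards a higher liveness score before being printed and presented to the sensor:

    import torch
    import torch.nn.functional as F

    def adversarial_spoof(fpad_net, spoof_img, epsilon=0.03):
        # spoof_img: tensor of shape (1, C, H, W), values in [0, 1].
        x = spoof_img.clone().detach().requires_grad_(True)
        live_label = torch.tensor([1])        # assumption: class 1 = 'live'
        loss = F.cross_entropy(fpad_net(x), live_label)
        loss.backward()
        # Descend the loss w.r.t. the input, pushing the spoof image
        # towards the 'live' class of the FPAD (FGSM-style step).
        x_adv = x - epsilon * x.grad.sign()
        # Clamp so the result remains a printable image.
        return x_adv.clamp(0.0, 1.0).detach()

As a rough sanity check on the reported figures: if each presentation succeeded independently with probability 0.28, ten attempts would succeed with probability 1 - (1 - 0.28)^10 ≈ 0.96, consistent with the ∼100% multi-attempt result (the paper's actual protocol need not assume independence).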
Publication year: 2021
ISBN (print): 978-3-030-68779-3
ISBN (online): 978-3-030-68780-9


Use this identifier to cite or link to this document: https://hdl.handle.net/11588/863539
Citations
  • PMC: not available
  • Scopus: 10
  • Web of Science: not available