AI-based code generators have become pivotal in assisting developers in writing software starting from natural language (NL). However, they are trained on large amounts of data, often collected from unsanitized online sources (e.g., GitHub, HuggingFace). As a consequence, AI models become an easy target for data poisoning, i.e., an attack that injects malicious samples into the training data to generate vulnerable code. To address this threat, this work investigates the security of AI code generators by devising a targeted data poisoning strategy. We poison the training data by injecting increasing amounts of code containing security vulnerabilities and assess the attack's success on different state-of-the-art models for code generation. Our study shows that AI code generators are vulnerable to even a small amount of poison. Notably, the attack success strongly depends on the model architecture and poisoning rate, whereas it is not influenced by the type of vulnerabilities. Moreover, since the attack does not impact the correctness of code generated by pretrained models, it is hard to detect. Lastly, our work offers practical insights into understanding and potentially mitigating this threat.

Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks / Cotroneo, D.; Improta, C.; Liguori, P.; Natella, R.. - 21:(2024), pp. 280-292. (Intervento presentato al convegno 32nd IEEE/ACM International Conference on Program Comprehension, ICPC 2024 tenutosi a prt nel 2024) [10.1145/3643916.3644416].

Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks

Cotroneo D.;Improta C.;Liguori P.;Natella R.
2024

Abstract

AI-based code generators have become pivotal in assisting developers in writing software starting from natural language (NL). However, they are trained on large amounts of data, often collected from unsanitized online sources (e.g., GitHub, HuggingFace). As a consequence, AI models become an easy target for data poisoning, i.e., an attack that injects malicious samples into the training data to generate vulnerable code. To address this threat, this work investigates the security of AI code generators by devising a targeted data poisoning strategy. We poison the training data by injecting increasing amounts of code containing security vulnerabilities and assess the attack's success on different state-of-the-art models for code generation. Our study shows that AI code generators are vulnerable to even a small amount of poison. Notably, the attack success strongly depends on the model architecture and poisoning rate, whereas it is not influenced by the type of vulnerabilities. Moreover, since the attack does not impact the correctness of code generated by pretrained models, it is hard to detect. Lastly, our work offers practical insights into understanding and potentially mitigating this threat.
2024
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks / Cotroneo, D.; Improta, C.; Liguori, P.; Natella, R.. - 21:(2024), pp. 280-292. (Intervento presentato al convegno 32nd IEEE/ACM International Conference on Program Comprehension, ICPC 2024 tenutosi a prt nel 2024) [10.1145/3643916.3644416].
File in questo prodotto:
Non ci sono file associati a questo prodotto.

I documenti in IRIS sono protetti da copyright e tutti i diritti sono riservati, salvo diversa indicazione.

Utilizza questo identificativo per citare o creare un link a questo documento: https://hdl.handle.net/11588/972387
Citazioni
  • ???jsp.display-item.citation.pmc??? ND
  • Scopus 0
  • ???jsp.display-item.citation.isi??? ND
social impact