Open access
Date
2020
Type
- Conference Paper
ETH Bibliography
yes
Abstract
Machine learning, and deep learning in particular, has recently been used to successfully address many tasks in the domain of code, such as finding and fixing bugs, code completion, decompilation, and type inference. However, the issue of adversarial robustness of models for code has gone largely unnoticed. In this work, we explore this issue by: (i) instantiating adversarial attacks for code (a domain with discrete and highly structured inputs), (ii) showing that, similar to other domains, neural models for code are vulnerable to adversarial attacks, and (iii) combining existing and novel techniques to improve robustness while preserving high accuracy.
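The abstract mentions instantiating adversarial attacks for code, a domain with discrete and highly structured inputs. As an illustration only, and not the paper's implementation, the sketch below shows one common flavor of such an attack: greedily renaming identifiers (a semantics-preserving transformation) to maximize a model's loss. The model interface model_loss, the candidate vocabulary, and the stub loss function are hypothetical stand-ins.

import re

def rename_identifier(code: str, old: str, new: str) -> str:
    # Whole-word rename so the program's semantics are preserved.
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def greedy_rename_attack(code, identifiers, candidates, model_loss):
    # Greedy discrete search: keep each rename that increases the
    # (hypothetical) model's loss over the current best perturbation.
    best_code, best_loss = code, model_loss(code)
    for ident in identifiers:
        for cand in candidates:
            perturbed = rename_identifier(best_code, ident, cand)
            loss = model_loss(perturbed)
            if loss > best_loss:
                best_code, best_loss = perturbed, loss
    return best_code, best_loss

# Toy usage: the stub loss just counts occurrences of "tmp"; a real
# attack would instead query the neural model of code under attack.
snippet = "def add(a, b):\n    result = a + b\n    return result"
adversarial, loss = greedy_rename_attack(
    snippet,
    identifiers=["result"],
    candidates=["tmp", "x0"],
    model_loss=lambda c: c.count("tmp"),
)
print(adversarial)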
Permanent link
https://doi.org/10.3929/ethz-b-000466229
Publication status
published
External links
Book title
Proceedings of the 37th International Conference on Machine Learning
Journal / series
Proceedings of Machine Learning Research
Volume
Pages / Article No.
Publisher
PMLR
Event
Organisational unit
03948 - Vechev, Martin / Vechev, Martin
Funding
680358 - Learning from Big Code: Probabilistic Models, Analysis and Synthesis (EC)
Related publications and datasets
Is cited by: https://doi.org/10.3929/ethz-b-000498126
Notes
Conference lecture held on July 14, 2020. Due to the Coronavirus (COVID-19), the conference was conducted virtually.