Metadata only
Date
2021-05-28
Type
- Conference Paper
ETH Bibliography
yes
Altmetrics
Abstract
Recent work has exposed the vulnerability of computer vision models to vector field attacks. Due to the widespread usage of such models in safety-critical applications, it is crucial to quantify their robustness against such spatial transformations. However, existing work only provides empirical robustness quantification against vector field deformations via adversarial attacks, which lack provable guarantees. In this work, we propose novel convex relaxations, enabling us, for the first time, to provide a certificate of robustness against vector field transformations. Our relaxations are model-agnostic and can be leveraged by a wide range of neural network verifiers. Experiments on various network architectures and different datasets demonstrate the effectiveness and scalability of our method.
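The vector field deformations discussed in the abstract displace each pixel by a per-pixel flow vector and resample the image. A minimal sketch of such a transformation (a hypothetical `deform` helper using NumPy bilinear interpolation; this illustrates the attack model only, not the paper's certification method) is:

```python
import numpy as np

def deform(image, vx, vy):
    """Apply a vector field (vx, vy) to a grayscale image:
    output[i, j] is sampled at image[i + vy[i, j], j + vx[i, j]]
    via bilinear interpolation, clipped to the image border."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Source coordinates under the flow, kept inside the image.
    sy = np.clip(ys + vy, 0, h - 1)
    sx = np.clip(xs + vx, 0, w - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    # Interpolate horizontally, then vertically.
    top = (1 - wx) * image[y0, x0] + wx * image[y0, x1]
    bot = (1 - wx) * image[y1, x0] + wx * image[y1, x1]
    return (1 - wy) * top + wy * bot

img = np.arange(16.0).reshape(4, 4)
zero = np.zeros((4, 4))
# A zero vector field leaves the image unchanged.
identity = deform(img, zero, zero)
```

An adversarial spatial attack searches over small-norm fields (vx, vy) that change the classifier's prediction; certification, as in this paper, instead bounds the network's output over all fields within a given norm ball.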
Publication status
published
External links
Journal / series
Proceedings of the AAAI Conference on Artificial Intelligence
Volume
Pages / article number
Publisher
AAAI
Conference
Subject
Adversarial attacks & robustness
Organisational unit
03948 - Vechev, Martin / Vechev, Martin