Metadata only
Date
2021-05-28
Type
Conference Paper
ETH Bibliography
yes
Abstract
Recent work has exposed the vulnerability of computer vision models to vector field attacks. Due to the widespread usage of such models in safety-critical applications, it is crucial to quantify their robustness against such spatial transformations. However, existing work only provides empirical robustness quantification against vector field deformations via adversarial attacks, which lack provable guarantees. In this work, we propose novel convex relaxations, enabling us, for the first time, to provide a certificate of robustness against vector field transformations. Our relaxations are model-agnostic and can be leveraged by a wide range of neural network verifiers. Experiments on various network architectures and different datasets demonstrate the effectiveness and scalability of our method.
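As a concrete illustration of the threat model named in the abstract (not the paper's own code), the following minimal Python sketch shows what a vector field deformation is: each output pixel samples the input image at a displaced location, with bilinear interpolation in between. The function name `deform` and the variable `flow` are illustrative; a robustness certificate of the kind the abstract describes would have to hold for every displacement field within a given norm bound.

```python
# Minimal sketch of a vector field (spatial) deformation of an image.
# Not the authors' implementation; names and the 0.5 px bound are assumptions.
import numpy as np

def deform(image: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Warp a (H, W) grayscale image by a (H, W, 2) displacement field.

    flow[i, j] = (dy, dx) shifts the sampling location of output pixel
    (i, j); values are read off the input image by bilinear interpolation.
    """
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Displaced, real-valued sampling coordinates, clipped to the image.
    sy = np.clip(ys + flow[..., 0], 0, h - 1)
    sx = np.clip(xs + flow[..., 1], 0, w - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    # Bilinear interpolation of the four neighbouring pixels.
    top = (1 - wx) * image[y0, x0] + wx * image[y0, x1]
    bot = (1 - wx) * image[y1, x0] + wx * image[y1, x1]
    return (1 - wy) * top + wy * bot

# Example: a random vector field with displacements bounded by 0.5 px.
rng = np.random.default_rng(0)
img = rng.random((28, 28))
flow = rng.uniform(-0.5, 0.5, size=(28, 28, 2))
warped = deform(img, flow)
```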
Publication status
published
Journal / series
Proceedings of the AAAI Conference on Artificial Intelligence
Publisher
AAAI
Subject
Adversarial attacks & robustness
Organisational unit
03948 - Vechev, Martin / Vechev, Martin