Reducing DNN Properties to Enable Falsification with Adversarial Attacks

Authors:

David Shriver, Sebastian Elbaum, Matthew B. Dwyer

Publication Date:

27 May 2021

Venue:

International Conference on Software Engineering (ICSE)

Abstract:

Deep Neural Networks (DNNs) are increasingly being deployed in safety-critical domains, from autonomous vehicles to medical devices, where the consequences of errors demand techniques that can provide stronger guarantees about behavior than just high test accuracy. This paper explores broadening the application of existing adversarial attack techniques to the falsification of DNN safety properties. We contend, and later show, that such attacks provide a powerful repertoire of scalable algorithms for property falsification. To enable the broad application of falsification, we introduce a semantics-preserving reduction of multiple safety property types, which subsume prior work, into a set of equivalid correctness problems amenable to adversarial attacks. We evaluate our reduction approach as an enabler of falsification on a range of DNN correctness problems and show its cost-effectiveness and scalability.
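To give a flavor of the underlying idea, the sketch below (not the paper's tool or reduction, just an illustration under simplifying assumptions) falsifies a property of the form "for all inputs in a box, the first output stays below a threshold" by searching for a counterexample with a PGD-style adversarial attack; the toy network, bounds, and hyperparameters are all hypothetical.

```python
# Hypothetical sketch: falsify "forall x in [lb, ub]: f(x)[0] <= threshold"
# by searching for a violating input with a PGD-style signed-gradient attack.
# The tiny network, bounds, and hyperparameters are illustrative only.
import torch

torch.manual_seed(0)

# A small stand-in DNN (assumption; any trained model could be used here).
model = torch.nn.Sequential(
    torch.nn.Linear(2, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
)

lb = torch.tensor([0.0, 0.0])   # lower bound of the input region
ub = torch.tensor([1.0, 1.0])   # upper bound of the input region
threshold = 1.5                 # property: output[0] must stay <= threshold

def falsify(steps=200, step_size=0.05):
    # Start from a random point inside the property's input region.
    x = (lb + (ub - lb) * torch.rand(2)).requires_grad_(True)
    for _ in range(steps):
        out = model(x)
        # The property is violated when out[0] > threshold,
        # so we ascend on out[0] with respect to the input.
        out[0].backward()
        with torch.no_grad():
            x += step_size * x.grad.sign()              # signed-gradient step
            x.copy_(torch.max(torch.min(x, ub), lb))    # project back into the box
        x.grad.zero_()
        if model(x)[0].item() > threshold:
            return x.detach()   # counterexample found: property falsified
    return None                 # no counterexample found (property not proved)

cex = falsify()
print("counterexample:", cex)
```

A returned counterexample is a concrete input witnessing the violation; returning None is inconclusive, since falsification, unlike verification, cannot establish that the property holds.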

Downloads:

[Paper] [Artifact] [Tool] [Video]