SPLASH 2022
Mon 5 - Sat 10 December 2022 Auckland, New Zealand
Sat 10 Dec 2022 11:30 - 12:00 at Lecture Theatre 2 - Proofs Chair(s): Atsushi Igarashi

Developers of machine learning applications often apply post-training neural network optimizations, such as quantization and pruning, that approximate a neural network to speed up inference and reduce energy consumption, while maintaining high accuracy and robustness.
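Post-training quantization, one of the approximations mentioned above, can be sketched in a few lines. This is a minimal, hypothetical example (the tiny two-layer network and the per-tensor symmetric scheme are illustrative, not taken from the paper): weights are rounded to 8-bit integers with a single scale per tensor and dequantized at inference time, so outputs stay close while storage shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

def quantize(w, bits=8):
    # Symmetric per-tensor quantization: map [-max|w|, max|w|]
    # onto the signed integer range and round.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def forward(x, w1, w2):
    # A small ReLU MLP, standing in for the original network.
    return np.maximum(x @ w1, 0) @ w2

(q1, s1), (q2, s2) = quantize(W1), quantize(W2)
x = rng.normal(size=(1, 4))
exact = forward(x, W1, W2)
approx = forward(x, q1 * s1, q2 * s2)  # dequantize, then run
print(np.max(np.abs(exact - approx)))  # small error; weights now 8-bit
```

The quantized network is a slightly perturbed copy of the original, which is exactly why one would hope that a robustness proof for the original carries over.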

Despite a recent surge in techniques for the robustness verification of neural networks, a major limitation of almost all state-of-the-art approaches is that verification must be rerun from scratch every time the network is even slightly modified. Running precise end-to-end verification from scratch for every new network is expensive and impractical in scenarios that use or compare multiple approximate versions of a network, where the robustness of every version must be verified efficiently.


We present FANC, the first general technique for transferring proofs between a given network and its multiple approximate versions without compromising verifier precision. To reuse the proofs obtained when verifying the original network, FANC generates a set of templates – connected symbolic shapes at intermediate layers of the original network – that capture the proof of the property to be verified. We present novel algorithms for generating and transforming templates that generalize to a broad range of approximate networks and reduce the verification cost.
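The template idea can be illustrated with a toy interval (box) analysis. This is a hypothetical sketch only: FANC's templates are richer connected symbolic shapes and its algorithms are more involved. Here we verify the original network once, record a slightly widened box at the intermediate layer as the "template", prove the property from the template onward, and then certify an approximate network merely by checking that its intermediate abstraction fits inside the template.

```python
import numpy as np

def affine(lo, hi, w):
    # Propagate a box through x @ w with interval arithmetic.
    wp, wn = np.maximum(w, 0), np.minimum(w, 0)
    return lo @ wp + hi @ wn, hi @ wp + lo @ wn

def relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 2))

# Step 1: verify the original network on an input box and record a
# template: a slightly widened box at the intermediate layer.
lo0, hi0 = np.array([0.0, 0.0]), np.array([0.1, 0.1])
m_lo, m_hi = relu(*affine(lo0, hi0, W1))
t_lo, t_hi = m_lo - 0.05, m_hi + 0.05  # widened template

# Step 2: prove the property once from the template onward; this
# output box is valid for ANY network whose intermediate abstraction
# lands inside the template.
out_lo, out_hi = affine(t_lo, t_hi, W2)

# Step 3: for an approximate (quantized) network, only check
# containment in the template; if it holds, the proof from the
# template onward transfers without re-analyzing the later layers.
scale = np.abs(W1).max() / 127
W1q = np.round(W1 / scale) * scale
a_lo, a_hi = relu(*affine(lo0, hi0, W1q))
transfers = bool(np.all(a_lo >= t_lo) and np.all(a_hi <= t_hi))
print(transfers)  # True: the template proof is reusable
```

The saving comes from Step 3: containment is a cheap pointwise check, replacing the re-analysis of everything after the intermediate layer for each approximate version.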

We present a comprehensive evaluation demonstrating the effectiveness of our approach. We consider a diverse set of networks obtained by applying popular approximation techniques, such as quantization and pruning, to fully-connected and convolutional architectures, and verify their robustness against different adversarial attacks, such as adversarial patches, L0 perturbations, rotation, and brightening. Our results indicate that FANC can significantly speed up verification with the state-of-the-art verifier DeepZ, by up to 4.1x.

Sat 10 Dec

Displayed time zone: Auckland, Wellington

10:30 - 12:00
Proofs (OOPSLA) at Lecture Theatre 2
Chair(s): Atsushi Igarashi (Kyoto University)
10:30 (30m) Talk: Data-Driven Lemma Synthesis for Interactive Proofs (OOPSLA)
Aishwarya Sivaraman (University of California at Los Angeles), Alex Sanchez-Stern (University of Massachusetts at Amherst), Bretton Chen (University of California at Los Angeles), Sorin Lerner (University of California at San Diego), Todd Millstein (University of California at Los Angeles)
DOI
11:00 (30m) Talk: Intrinsically-Typed Definitional Interpreters à la Carte (OOPSLA)
Cas van der Rest (Delft University of Technology), Casper Bach Poulsen (Delft University of Technology), Arjen Rouvoet (Delft University of Technology), Eelco Visser (Delft University of Technology), Peter D. Mosses (Swansea University and Delft University of Technology)
DOI
11:30 (30m) Research paper: Proof transfer for fast certification of multiple approximate neural networks (OOPSLA)
Shubham Ugare (University of Illinois at Urbana-Champaign), Gagandeep Singh (University of Illinois at Urbana-Champaign), Sasa Misailovic (University of Illinois at Urbana-Champaign)
DOI