Track C1: Safety Verification of Deep Neural Networks (DNNs)
Track Introduction by Daniel Neider and Taylor T. Johnson

Formal verification of neural networks and, more broadly, machine learning models has been a burgeoning field over the past few years, with interest continuing to grow as these data-driven methods are adopted more widely. This track will focus on methods for the formal verification of machine learning models, including neural networks but also other model types, across application domains. Papers and presentations describing methodologies, software frameworks, technical approaches, case studies, and related topics are all welcome contributions.

In addition, benchmarks are critical for evaluating scalability and broader progress within formal methods. Recent benchmarks used to evaluate neural network verification methods, and machine learning verification more broadly, have focused mostly on computer vision problems, specifically the local robustness of image classifiers to adversarial perturbations. However, neural networks and machine learning models are being deployed across a variety of safety- and security-critical domains, and domain-specific benchmarks, covering both the machine learning models and their specifications, are necessary to identify limitations and directions for improvement, as well as to evaluate and ensure the applicability of these methods in those domains. For instance, controllers in autonomous systems are increasingly created with data-driven methods, and malware classifiers in security are often neural networks; each of these domains has its own particularities, as do countless other applications in cyber-physical systems, finance, science, and beyond.
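
To make the notion of a local robustness specification concrete, the sketch below shows one way such a property is commonly stated for an image classifier: the prediction should not change for any input within an L-infinity ball of radius eps around a given image. The names (model, x, eps, true_label, num_samples) and the sampling-based check are illustrative assumptions only; a formal verifier would prove the property for all points in the perturbation ball rather than probe a finite sample.

```python
# Minimal sketch of a local robustness property for an image classifier.
# This only probes the property empirically; it is not a verification method.
import numpy as np

def locally_robust(model, x, eps, true_label, num_samples=1000):
    """Check whether the classifier keeps its prediction on randomly sampled
    points in the L-infinity ball of radius eps around input x.
    Returning True is NOT a proof; a verifier must cover all points."""
    for _ in range(num_samples):
        perturbation = np.random.uniform(-eps, eps, size=x.shape)
        x_adv = np.clip(x + perturbation, 0.0, 1.0)  # stay in valid pixel range
        if np.argmax(model(x_adv)) != true_label:
            return False  # counterexample found: property violated
    return True  # no violation found among the samples
```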

This track will also focus on collecting and publishing benchmarks, both models and specifications, from computer vision, finance, security, and other domains where formal verification of neural networks and machine learning is being considered. In addition, the track aims to collect benchmarks for future iterations of the International Verification of Neural Networks Competition (VNN-COMP) and of the category on Artificial Intelligence and Neural Network Control Systems (AINNCS) in the International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP), as well as input for the VNN-LIB standard for the verification of neural networks. The event will award prizes for the best benchmarks, specifications, and related contributions.

Program

Submission

Please submit your contributions via EquinOCS.

Track Organizers

Name                 Institution
Daniel Neider        TU Dortmund, DE
Taylor T. Johnson    Vanderbilt University, US