Verifying Probabilistic Specifications with Functional Lagrangians

Abstract

We propose a general framework for verifying input-output properties of neural networks using \emph{functional Lagrange multipliers}. We show that this framework can be used to verify arbitrary probabilistic specifications and that it recovers prior work as special cases. We derive theoretical properties of the framework, showing that it leads to provably tight verification when a sufficiently flexible class of functional multipliers is chosen. We demonstrate that, by judicious choice of the class of functional multipliers, the framework can accommodate any desired tradeoff between tightness and complexity. We further show that the framework can handle Bayesian neural networks with Gaussian posterior approximations, MC-dropout networks, and probabilistic inputs, generalizing prior work on neural network verification in probabilistic settings.
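As a rough illustration of the idea, here is a minimal sketch of how functional multipliers yield a valid bound; the notation ($h_k$, $\lambda_k$, $\psi$, $X_k$) is ours for exposition and not necessarily the paper's, and it assumes intermediate sets $X_k$ that contain all reachable activations. For a network $x_{k+1} = h_k(x_k)$ with input set $X_0$ and specification objective $\psi$, any choice of functions $\lambda_1, \dots, \lambda_K$ gives
\[
\max_{x_0 \in X_0} \psi\big(h_{K-1}(\cdots h_0(x_0)\cdots)\big)
\;\le\;
\max_{x_0 \in X_0} \lambda_1\big(h_0(x_0)\big)
\;+\; \sum_{k=1}^{K-1} \max_{x_k \in X_k} \Big[ \lambda_{k+1}\big(h_k(x_k)\big) - \lambda_k(x_k) \Big]
\;+\; \max_{x_K \in X_K} \Big[ \psi(x_K) - \lambda_K(x_K) \Big],
\]
because along any feasible trajectory the multiplier terms telescope and the right-hand side evaluates to exactly $\psi(x_K)$. Richer classes of $\lambda_k$ tighten the bound at the cost of harder per-layer maximizations, while restricting to constant or linear $\lambda_k$ recovers standard Lagrangian relaxations.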

Publications