Certified Private Inference on Neural Networks via Lipschitz-Guided Abstraction Refinement

12 pages • Published: October 23, 2023

Abstract

Private inference on neural networks requires running all the computation on encrypted data. Unfortunately, neural networks contain a large number of non-arithmetic operations, such as ReLU activation functions and max pooling layers, which incur a high latency cost in their encrypted form. To address this issue, the majority of private inference methods replace some or all of the non-arithmetic operations with a polynomial approximation. This step introduces approximation errors that can substantially alter the output of the neural network and decrease its predictive performance. In this paper, we propose a Lipschitz-Guided Abstraction Refinement method (LiGAR), which provides strong guarantees on the global approximation error. Our method is iterative and leverages state-of-the-art Lipschitz constant estimation techniques to produce increasingly tighter bounds on the worst-case error. At each iteration, LiGAR designs the least expensive polynomial approximation by solving the dual of the corresponding optimization problem. Our preliminary experiments show that LiGAR can easily converge to the optimum on medium-sized neural networks.

Keyphrases: abstract interpretation, deep neural networks, homomorphic encryption, Lipschitz constant, polynomial approximation, privacy-preserving machine learning

In: Nina Narodytska, Guy Amir, Guy Katz and Omri Isac (editors). Proceedings of the 6th Workshop on Formal Methods for ML-Enabled Autonomous Systems, vol 16, pages 35-46.
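The sketch below is not the paper's LiGAR algorithm; it only illustrates, under assumptions, the kind of reasoning the abstract refers to: a non-arithmetic activation (ReLU) is replaced by a low-degree polynomial, its worst-case local error is measured on a bounded input interval, and per-layer Lipschitz constants propagate those local errors into a bound on the global output deviation. The quadratic coefficients, the interval radius, and the per-layer Lipschitz constants are hypothetical placeholder values, not numbers from the paper.

```python
# Illustrative sketch of Lipschitz-based error propagation for polynomial
# activation approximations. All numeric values are hypothetical.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def poly_relu(x, coeffs):
    """Quadratic surrogate a + b*x + c*x^2, evaluated with Horner's rule."""
    a, b, c = coeffs
    return a + x * (b + x * c)

def local_error(coeffs, radius, n=10001):
    """Worst-case |relu(x) - poly(x)| on [-radius, radius], via dense sampling."""
    xs = np.linspace(-radius, radius, n)
    return float(np.max(np.abs(relu(xs) - poly_relu(xs, coeffs))))

def global_error_bound(local_errors, lipschitz):
    """Bound the output deviation: an error injected at layer i is amplified
    by the Lipschitz constants of every layer that follows it."""
    total = 0.0
    for i, eps in enumerate(local_errors):
        amplification = float(np.prod(lipschitz[i + 1:]))  # empty product = 1
        total += eps * amplification
    return total

if __name__ == "__main__":
    coeffs = (0.125, 0.5, 0.25)      # minimax quadratic ReLU surrogate on [-1, 1]
    eps = local_error(coeffs, radius=1.0)
    lipschitz = [1.8, 2.3, 1.5]      # hypothetical per-layer Lipschitz constants
    print("per-activation worst-case error:", eps)
    print("global output error bound:", global_error_bound([eps] * 3, lipschitz))
```

In this composition-style bound, tightening a layer's Lipschitz estimate or spending a higher-degree (more expensive) polynomial on a heavily amplified layer both shrink the global bound, which is the trade-off the abstract describes LiGAR optimizing iteratively.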