DOI: 10.1007/978-3-030-65474-0_13

Probabilistic Lipschitz Analysis of Neural Networks

Published: 18 November 2020

Abstract

We are interested in algorithmically proving the robustness of neural networks. Several notions of robustness have been discussed in the literature; we focus on probabilistic notions of robustness, which assume that it is feasible to construct a statistical model of the process generating the inputs of a neural network. We find this a reasonable assumption given the rapid advances in algorithms for learning generative models of data. A neural network f is then defined to be probabilistically robust if, for a randomly generated pair of inputs, f is likely to demonstrate k-Lipschitzness, i.e., the distance between the outputs computed by f is upper-bounded by the kth multiple of the distance between the pair of inputs. We name this property probabilistic Lipschitzness.
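The defining inequality can be checked empirically by sampling input pairs from the generative model. The sketch below is not the paper's algorithm (which computes sound bounds); it is only an illustrative Monte Carlo estimate of how often a function satisfies the k-Lipschitz condition on random pairs, with hypothetical helper names (`empirical_lipschitz_rate`, `sample_input`):

```python
import numpy as np

def empirical_lipschitz_rate(f, sample_input, k, n_pairs=1000, rng=None):
    """Estimate P[ ||f(x1) - f(x2)|| <= k * ||x1 - x2|| ] by sampling.

    `sample_input` draws one input from the generative model; `f` is the
    network under test. This is a statistical estimate, not a proof.
    """
    rng = rng or np.random.default_rng(0)
    hits = 0
    for _ in range(n_pairs):
        x1, x2 = sample_input(rng), sample_input(rng)
        dx = np.linalg.norm(x1 - x2)
        dy = np.linalg.norm(f(x1) - f(x2))
        hits += (dy <= k * dx + 1e-12)  # tolerance handles dx == 0
    return hits / n_pairs

# Example: scaling by 2 is exactly 2-Lipschitz, so the 2-Lipschitz
# condition holds on every sampled pair, while k = 1.9 fails almost surely.
f = lambda x: 2.0 * x
sample = lambda rng: rng.normal(size=3)
rate_ok = empirical_lipschitz_rate(f, sample, k=2.0)    # 1.0
rate_bad = empirical_lipschitz_rate(f, sample, k=1.9)   # ~0.0
```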
We model generative models and neural networks together as programs in a simple, first-order, imperative, probabilistic programming language, pcat. Inspired by a large body of existing literature, we define a denotational semantics for this language. We then develop a sound local Lipschitzness analysis for cat, a non-probabilistic sublanguage of pcat. This analysis can compute an upper bound on the “Lipschitzness” of a neural network in a bounded region of the input set. Next, we present a provably correct algorithm, PROLIP, that analyzes the behavior of a neural network in a user-specified box-shaped input region and computes (i) lower bounds on the probabilistic mass of such a region with respect to the generative model, and (ii) upper bounds on the Lipschitz constant of the neural network in this region, with the help of the local Lipschitzness analysis. Finally, we present a sketch of a proof-search algorithm that uses PROLIP as a primitive for finding proofs of probabilistic Lipschitzness. We implement the PROLIP algorithm and empirically evaluate its computational complexity.
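A standard coarse way to upper-bound a network's Lipschitz constant, which the paper's local analysis refines, multiplies the spectral norms of the weight matrices: each ReLU is 1-Lipschitz, so the composition is Lipschitz with constant at most the product of the layer norms. A minimal sketch of this well-known global bound (not the PROLIP algorithm itself):

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Sound (often loose) Lipschitz bound for f = W_n ∘ relu ∘ ... ∘ W_1.

    Since relu is 1-Lipschitz and a linear map W is ||W||_2-Lipschitz,
    the composition is bounded by the product of spectral norms.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value
    return bound

W1 = np.array([[3.0, 0.0], [0.0, 1.0]])  # spectral norm 3
W2 = np.array([[2.0, 0.0]])              # spectral norm 2
bound = lipschitz_upper_bound([W1, W2])  # 6.0
```

A local analysis over a box-shaped region can do better than this global product, e.g. by tracking which ReLUs are active in the region.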


Published In

Static Analysis: 27th International Symposium, SAS 2020, Virtual Event, November 18–20, 2020, Proceedings
Nov 2020
390 pages
ISBN:978-3-030-65473-3
DOI:10.1007/978-3-030-65474-0

Publisher

Springer-Verlag

Berlin, Heidelberg
