DOI: 10.1145/3691620.3695303
Research article

Learning DNN Abstractions using Gradient Descent

Published: 27 October 2024

Abstract

Deep Neural Networks (DNNs) are being trained and trusted for performing fairly complex tasks, even in business- and safety-critical applications. This necessitates that they be formally analyzed before deployment. Scalability of such analyses is a major bottleneck in their widespread use. There has been a lot of work on abstraction, and counterexample-guided abstraction refinement (CEGAR) of DNNs to address the scalability issue. However, these abstraction-refinement techniques explore only a subset of possible abstractions, and may miss an optimal abstraction. In particular, the refinement updates the abstract DNN based only on local information derived from the spurious counterexample in each iteration. The lack of a global view may result in a series of bad refinement choices, limiting the search to a region of sub-optimal abstractions. We propose a novel technique that parameterizes the construction of the abstract network in terms of continuous real-valued parameters. This allows us to use gradient descent to search through the space of possible abstractions, and ensures that the search never gets restricted to sub-optimal abstractions. Moreover, our parameterization can express more general abstractions than the existing techniques, enabling us to discover better abstractions than previously possible.
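The core idea in the abstract — expressing the construction of the abstract network through continuous real-valued parameters so that gradient descent can search the space of abstractions — can be illustrated with a toy sketch. Everything below (the names `theta` and `soft_assign`, the reconstruction-based precision loss, the finite-difference gradients) is our own illustrative choice under assumed simplifications, not the paper's actual construction: one layer's neurons are softly merged into fewer abstract neurons, and the merge parameters are optimized by plain gradient descent.

```python
import numpy as np

# Toy sketch (our illustration, NOT the paper's method): merge the neurons
# of one layer into fewer abstract neurons, where the merging is controlled
# by continuous parameters `theta` and searched by gradient descent.
# Finite differences stand in for backpropagation to keep it dependency-free.

rng = np.random.default_rng(0)

W = rng.normal(size=(6, 4))      # one concrete layer: 6 inputs -> 4 neurons
X = rng.normal(size=(32, 6))     # sample inputs used to score an abstraction
H = X @ W                        # concrete pre-activations on those samples

K = 2                            # the abstract layer keeps only 2 neurons
theta = rng.normal(size=(4, K))  # real-valued parameters of the abstraction

def soft_assign(t):
    """Row-wise softmax: each concrete neuron softly joins one abstract group."""
    e = np.exp(t - t.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss(t):
    """Precision loss: how much the merging distorts the layer's behavior."""
    A = soft_assign(t)           # 4x2 soft grouping matrix
    Ha = H @ A                   # abstract pre-activations
    Hr = Ha @ A.T                # mapped back to the concrete space
    return float(np.mean((H - Hr) ** 2))

lr, eps = 0.1, 1e-5
loss0 = loss(theta)
best_theta, best_loss = theta.copy(), loss0
for _ in range(300):             # gradient descent over the abstraction space
    g = np.zeros_like(theta)
    for i in range(theta.shape[0]):
        for j in range(K):
            d = np.zeros_like(theta)
            d[i, j] = eps
            g[i, j] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
    theta -= lr * g
    if loss(theta) < best_loss:
        best_theta, best_loss = theta.copy(), loss(theta)
theta = best_theta               # keep the best abstraction parameters seen

print(f"precision loss: {loss0:.4f} -> {loss(theta):.4f}")
```

Because `theta` is continuous, the search moves smoothly between candidate groupings rather than committing to one discrete merge per refinement step; a discrete abstraction could then be read off, e.g., by taking the argmax of each row of `soft_assign(theta)`.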


Published In

ASE '24: Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering
October 2024
2587 pages
ISBN:9798400712487
DOI:10.1145/3691620
Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. deep neural networks
  2. abstraction
  3. formal verification

Funding Sources

  • This project was partly funded by the HUJI-IITD MFIRP Scheme

Conference

ASE '24
Acceptance Rates

Overall Acceptance Rate 82 of 337 submissions, 24%

