Learning DNN Abstractions using Gradient Descent
Abstract
Published In
- General Chair: Vladimir Filkov
- Program Co-chairs: Baishakhi Ray, Minghui Zhou
Publisher
Association for Computing Machinery, New York, NY, United States
Qualifiers
- Research-article
Funding Sources
- This project was partly funded by the HUJI-IITD MFIRP Scheme
Article Metrics
- Total Citations: 0
- Total Downloads: 65
- Downloads (last 12 months): 65
- Downloads (last 6 weeks): 11