Detecting overfitting via adversarial examples
R Werpachowski, A György… - Advances in Neural …, 2019 - proceedings.neurips.cc
Abstract
The repeated community-wide reuse of test sets in popular benchmark problems raises doubts about the credibility of reported test-error rates. Verifying whether a learned model is overfitted to a test set is challenging, as independent test sets drawn from the same data distribution are usually unavailable, while other test sets may introduce a distribution shift. We propose a new hypothesis test that uses only the original test data to detect overfitting. It utilizes a new unbiased error estimate based on adversarial examples generated from the test data, combined with importance weighting. Overfitting is detected if this error estimate differs sufficiently from the original test error rate. We develop a specialized variant of our test for multiclass image classification, and apply it to testing overfitting of recent models to the popular ImageNet benchmark. Our method correctly indicates overfitting of the trained model to the training set, but is not able to detect any overfitting to the test set, in line with other recent work on this topic.
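The abstract compresses the method into a few sentences; the sketch below (not the authors' code) illustrates the overall flow under stated assumptions. Here `attack` is a hypothetical label-preserving adversarial generator, `weight_fn` is a hypothetical importance-weight function correcting for the shift from the test distribution to the adversarial-example distribution, and the final step is a simplified two-sided z-test rather than the dependence-aware test statistic derived in the paper.

```python
import numpy as np
from scipy.stats import norm

def overfitting_test(model, X_test, y_test, attack, weight_fn, alpha=0.05):
    """Hedged sketch of an adversarial-example-based overfitting test.

    model     : callable mapping a batch of inputs to predicted labels
    attack    : hypothetical label-preserving adversarial generator, x -> x'
    weight_fn : hypothetical importance weights w(x, x') making the
                adversarial error estimate unbiased for the true risk
    """
    n = len(y_test)

    # Standard error rate on the original (possibly reused) test set.
    err_orig = np.mean(model(X_test) != y_test)

    # Adversarial examples generated from the test data.
    X_adv = attack(X_test, y_test)

    # Importance-weighted error estimate on the adversarial examples;
    # unbiased if the weights equal the density ratio between the data
    # distribution and the adversarial-generation distribution.
    w = weight_fn(X_test, X_adv)
    losses = (model(X_adv) != y_test).astype(float)
    err_adv = np.mean(w * losses)

    # Simplified two-sided z-test on the difference of the two estimates
    # (it ignores their dependence through the shared test points, which
    # the paper's specialized test statistic accounts for).
    var = np.var(w * losses) / n + err_orig * (1.0 - err_orig) / n
    z = (err_adv - err_orig) / np.sqrt(var + 1e-12)
    p_value = 2.0 * norm.sf(abs(z))

    # Overfitting to the test set is suspected when the two error
    # estimates differ significantly.
    return err_orig, err_adv, p_value, p_value < alpha
```

The key design point the abstract alludes to: because both estimates are computed from the same test data, no fresh sample is needed; the adversarial examples play the role of an independent probe, with importance weighting removing the bias introduced by the adversarial perturbation.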