Unlocking high-accuracy differentially private image classification through scale
Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with
access to a machine learning model from extracting information about individual training
points. Differentially Private Stochastic Gradient Descent (DP-SGD), the most popular DP
training method for deep learning, realizes this protection by injecting noise during training.
However, previous works have found that DP-SGD often leads to a significant degradation in
performance on standard image classification benchmarks. Furthermore, some authors have …
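The abstract describes DP-SGD's core mechanism: clipping each example's gradient and adding calibrated Gaussian noise before the parameter update. A minimal sketch of that recipe for a toy linear model is below; the function name, hyperparameters, and squared-loss model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step for linear regression with squared loss (sketch).

    Per-example gradients are clipped to L2 norm <= clip_norm, summed,
    and Gaussian noise with std noise_mult * clip_norm is added before
    averaging -- the standard DP-SGD recipe.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(X)
    # Per-example gradients of 0.5 * (x.w - y)^2 with respect to w.
    residuals = X @ w - y                # shape (n,)
    grads = residuals[:, None] * X       # shape (n, d)
    # Clip each example's gradient to L2 norm at most clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Sum, add noise scaled to the clipping bound, then average.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / n
```

With the noise multiplier set to zero and an effectively infinite clipping bound, the step reduces to ordinary mini-batch gradient descent, which makes the privacy/utility trade-off the abstract mentions explicit: tighter clipping and larger noise strengthen the guarantee but distort the update.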