Tight auditing of differentially private machine learning

M Nasr, J Hayes, T Steinke, B Balle, F Tramèr… - 32nd USENIX Security Symposium (USENIX Security 23), 2023 - usenix.org
Abstract
Auditing mechanisms for differential privacy use probabilistic means to empirically estimate the privacy level of an algorithm. For private machine learning, existing auditing mechanisms are tight: the empirical privacy estimate (nearly) matches the algorithm's provable privacy guarantee. But these auditing techniques suffer from two limitations. First, they only give tight estimates under implausible worst-case assumptions (e.g., a fully adversarial dataset). Second, they require thousands or millions of training runs to produce nontrivial statistical estimates of the privacy leakage.
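To make the auditing idea (and why it needs so many training runs) concrete, here is a minimal sketch of the standard empirical-epsilon recipe the abstract alludes to, not code from this paper: run a distinguishing (membership inference) attack across many training runs, bound its true- and false-positive rates with Clopper-Pearson intervals, and convert them into a statistically valid lower bound on epsilon via the DP inequality TPR ≤ e^ε · FPR + δ. The function names, confidence level, and example counts below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta


def clopper_pearson(successes, trials, alpha=0.05):
    """Two-sided Clopper-Pearson confidence interval for a binomial rate."""
    lo = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lo, hi


def empirical_epsilon_lower_bound(tp, fp, n_pos, n_neg, delta=1e-5, alpha=0.05):
    """Conservative lower bound on epsilon from a distinguishing attack.

    tp out of n_pos runs: attack flags the canary when it was in the training set.
    fp out of n_neg runs: attack flags the canary when it was absent.
    Combines a lower confidence bound on TPR with an upper bound on FPR,
    then applies epsilon >= log((TPR - delta) / FPR) from the DP definition.
    """
    tpr_lo, _ = clopper_pearson(tp, n_pos, alpha)
    _, fpr_hi = clopper_pearson(fp, n_neg, alpha)
    if fpr_hi <= 0 or tpr_lo - delta <= 0:
        return 0.0
    return max(0.0, np.log((tpr_lo - delta) / fpr_hi))


# Illustrative numbers: 950 detections in 1000 "member" runs,
# 60 false alarms in 1000 "non-member" runs.
print(empirical_epsilon_lower_bound(tp=950, fp=60, n_pos=1000, n_neg=1000))
```

Because the confidence intervals shrink only at a rate of roughly 1/sqrt(number of runs), tight lower bounds from this recipe require thousands of training runs, which is the second limitation the abstract highlights.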