Jul 13, 2020 · In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL), wherein the data is distributed among many devices ...
At the heart of our result is a novel technique, called random check-in, that relies only on randomness independently generated by each individual client ...
Jul 30, 2020 · We show that distributed learning protocols based on random check-ins can attain privacy gains similar to privacy amplification by subsampling/ ...
Dec 6, 2020 · Our main contribution is the random check-in distributed protocol, which crucially relies only on randomized participation decisions made ...
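The check-in mechanism described in these snippets, where each client independently and randomly decides whether and when to participate, can be sketched in simulation. This is an illustrative sketch only: the function names, the shared `Random` instance (used here for reproducibility; in the real protocol each client draws its own local randomness), and the first-come collision rule are assumptions, not the paper's exact protocol.

```python
import random

def random_check_in(num_time_slots, check_in_prob, rng):
    """A single client's local decision: with probability check_in_prob,
    check in to a uniformly random time slot; otherwise abstain.
    No coordination with the server or other clients is needed."""
    if rng.random() < check_in_prob:
        return rng.randrange(num_time_slots)  # chosen slot index
    return None  # client sits this round out

def assign_slots(client_ids, num_time_slots, check_in_prob, seed=0):
    """Simulated server view: collect check-ins and keep at most one
    client per slot (here, the first to claim it; extras are dropped)."""
    rng = random.Random(seed)  # one RNG only for simulation purposes
    slots = {}
    for cid in client_ids:
        slot = random_check_in(num_time_slots, check_in_prob, rng)
        if slot is not None and slot not in slots:
            slots[slot] = cid
    return slots
```

Because each participation decision is made locally, the server never learns anything about a client's data from the scheduling step itself; the privacy amplification comes from the server's uncertainty about who checked in where.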
Summary and Contributions: This paper studies privacy amplification for differentially private stochastic gradient descent (DP-SGD) in the setting of federated learning ...
Abstract: Differentially Private Stochastic Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data ...
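For context on the DP-SGD building block mentioned above: each per-example gradient is clipped to a fixed L2 norm and Gaussian noise calibrated to that norm is added before the averaged update. A minimal sketch, with illustrative names; real implementations calibrate `noise_multiplier` via a privacy accountant rather than choosing it ad hoc:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD update (sketch): clip each per-example gradient to
    L2 norm clip_norm, sum, add Gaussian noise with standard deviation
    noise_multiplier * clip_norm, average, and take a gradient step."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    batch_size = len(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch_size
    return params - lr * noisy_mean
```

Clipping bounds each example's influence on the update, which is what lets the added noise translate into a formal differential-privacy guarantee; the snippets above concern how participation randomness can amplify that guarantee in the federated setting.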
This paper provides a new privacy amplification technique for differential privacy. The proposed solution is a novel contribution to the literature.