FedReview: A Review Mechanism for Rejecting Poisoned Updates in Federated Learning

T Zheng, B Li - arXiv preprint arXiv:2402.16934, 2024 - arxiv.org
Federated learning has recently emerged as a decentralized approach to learning a high-performance model without access to user data. Despite its effectiveness, federated learning gives malicious users opportunities to manipulate the model by uploading poisoned model updates to the server. In this paper, we propose a review mechanism called FedReview to identify and reject potentially poisoned updates in federated learning. Under our mechanism, the server randomly assigns a subset of clients as reviewers to evaluate the model updates on their training datasets in each round. The reviewers rank the model updates based on the evaluation results and count the number of updates with relatively low quality as the estimated number of poisoned updates. Based on the review reports, the server employs a majority voting mechanism to integrate the rankings and remove the potential poisoned updates in the model aggregation process. Extensive evaluation on multiple datasets demonstrates that FedReview can help the server learn a well-performing global model in an adversarial environment.
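The review-and-vote procedure described in the abstract can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's exact algorithm: updates are plain NumPy vectors, each reviewer's evaluation is summarized as a scalar loss, a fixed estimate of the number of poisoned updates is used (the paper has each reviewer estimate this from the ranking), and the surviving updates are combined by simple FedAvg-style averaging. The function name `review_and_aggregate` and its parameters are hypothetical.

```python
import numpy as np


def review_and_aggregate(updates, reviewer_losses, num_suspect):
    """Sketch of a FedReview-style review mechanism.

    updates: list of model updates (1-D NumPy arrays for simplicity).
    reviewer_losses: array of shape (n_reviewers, n_updates); entry [r, u]
        is reviewer r's evaluation loss for update u on r's local data.
    num_suspect: assumed estimate of the number of poisoned updates each
        reviewer flags (a simplification of the paper's per-reviewer estimate).
    """
    n_reviewers, n_updates = reviewer_losses.shape

    # Each reviewer ranks the updates by evaluation loss (higher loss means
    # lower quality) and votes against its num_suspect worst-ranked updates.
    votes = np.zeros(n_updates, dtype=int)
    for r in range(n_reviewers):
        worst = np.argsort(reviewer_losses[r])[-num_suspect:]
        votes[worst] += 1

    # Majority voting: drop an update if more than half the reviewers
    # flagged it as low quality; keep the rest for aggregation.
    keep = [u for u in range(n_updates) if votes[u] <= n_reviewers // 2]

    # Aggregate the surviving updates by simple averaging.
    aggregated = np.mean([updates[u] for u in keep], axis=0)
    return aggregated, keep
```

With three honest reviewers and one poisoned update that evaluates poorly on every reviewer's data, all three reviewers vote against it, the majority-vote filter removes it, and only the benign updates are averaged.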