BIBLIOGRAPHY

Publications, working papers, and other research using data resources from IPUMS.

Full Citation

Title: Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization

Citation Type: Miscellaneous

Publication Year: 2021

arXiv ID: 2101.11799v1

Abstract: Federated learning (FL), a type of distributed machine learning framework, is vulnerable to external attacks on FL models during parameter transmission. An attacker in FL may control a number of participant clients and purposely craft the uploaded model parameters to manipulate system outputs, namely, model poisoning (MP). In this paper, we aim to propose effective MP algorithms that defeat state-of-the-art defensive aggregation mechanisms (e.g., Krum and trimmed mean) implemented at the server without being noticed, i.e., covert MP (CMP). Specifically, we first formulate MP as an optimization problem that minimizes the Euclidean distance between the manipulated model and the designated one, constrained by a defensive aggregation rule. Then, we develop CMP algorithms against different defensive mechanisms based on the solutions of their corresponding optimization problems. Furthermore, to reduce the optimization complexity, we propose low-complexity CMP algorithms with only a slight performance degradation. For the case in which the attacker does not know the defensive aggregation mechanism, we design a blind CMP algorithm, in which the manipulated model is adjusted according to the aggregated model produced by the unknown defensive aggregation. Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
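The paper's CMP attacks themselves are not reproduced here, but as a point of reference, the trimmed-mean defense the abstract names can be sketched as follows. This is a minimal illustration under our own assumptions (the function name and toy values are ours, not the authors'): each parameter coordinate is sorted across client updates, the extremes are dropped, and the remainder is averaged.

```python
import numpy as np

def trimmed_mean_aggregate(updates, trim_k):
    """Coordinate-wise trimmed mean: for each parameter coordinate,
    drop the trim_k largest and trim_k smallest client values, then
    average the rest. One of the defensive aggregation rules the
    paper's covert attacks are designed to evade."""
    stacked = np.stack(updates)             # shape: (num_clients, num_params)
    sorted_vals = np.sort(stacked, axis=0)  # sort each coordinate across clients
    kept = sorted_vals[trim_k:len(updates) - trim_k]
    return kept.mean(axis=0)

# Three honest updates plus one crude (non-covert) poisoned update;
# trimming discards the extreme values, so the crude attack is filtered out.
honest = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([0.9, 1.9])]
poisoned = [np.array([100.0, -100.0])]
agg = trimmed_mean_aggregate(honest + poisoned, trim_k=1)
# agg stays near the honest average: [1.05, 1.95]
```

A covert attack in the paper's sense must instead craft updates that survive this sorting-and-trimming step while still shifting the aggregate toward the attacker's target model.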

Url: https://arxiv.org/pdf/2101.11799.pdf

User Submitted?: No

Authors: Wei, Kang; Li, Jun; Ding, Ming; Ma, Chuan; Jeon, Yo-Seb; Poor, H. Vincent

Publisher: School of Electrical and Optical Engineering

Data Collections: IPUMS USA

Topics: Other, Population Data Science

Countries: