Anomaly detection plays a critical role in ensuring the robustness and reliability of federated learning (FL) systems that rely on a distributed implementation of stochastic gradient descent (SGD). Existing methods typically apply norm-based gradient filters at each iteration to eliminate possible outliers, an approach that can be ineffective when the training data are heterogeneous and unbalanced. We propose a novel heuristic scheme that adjusts the weights in the gradient aggregation step based on two anomaly metrics, namely the relative distance and the convergence measure. Simulation results show that our proposed scheme yields notable performance gains over norm-based policies when the agents have distinct data distributions.
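As a rough illustration of the kind of scheme the abstract describes, the following minimal Python sketch down-weights anomalous agents in the aggregation step instead of discarding them outright. The concrete definitions used here (relative distance as normalized deviation from the coordinate-wise median gradient, and convergence measure as local loss non-improvement) and the exponential weighting are illustrative assumptions, not the paper's actual formulas.

```python
import numpy as np

def aggregate(gradients, prev_losses, curr_losses, alpha=1.0, beta=1.0):
    """Weighted gradient aggregation that down-weights anomalous agents.

    NOTE: the anomaly metrics below are hypothetical stand-ins for the
    paper's 'relative distance' and 'convergence measure'.
    """
    G = np.stack(gradients)                       # (num_agents, dim)
    median = np.median(G, axis=0)                 # robust reference gradient
    # Assumed relative distance: deviation from the median, normalized.
    dist = np.linalg.norm(G - median, axis=1) / (np.linalg.norm(median) + 1e-12)
    # Assumed convergence measure: penalize agents whose local loss rose.
    conv = np.maximum(np.asarray(curr_losses) - np.asarray(prev_losses), 0.0)
    # Smaller anomaly score -> larger aggregation weight (soft, not a filter).
    weights = np.exp(-(alpha * dist + beta * conv))
    weights /= weights.sum()
    return weights @ G                            # weighted average gradient

# Usage: three benign agents and one outlier with a large, diverging gradient.
rng = np.random.default_rng(0)
grads = [rng.normal(0.0, 0.1, size=5) for _ in range(3)]
grads.append(rng.normal(5.0, 0.1, size=5))        # anomalous agent
agg = aggregate(grads, prev_losses=[1.0] * 4, curr_losses=[0.9, 0.9, 0.9, 1.4])
print(agg)                                        # close to the benign mean
```

Unlike a hard norm-based filter, this soft weighting keeps contributions from agents whose gradients merely look unusual because their local data distribution differs, which is the heterogeneous setting the abstract targets.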
Funding agencies: This work was supported in part by the Centrum för Industriell Informationsteknologi (CENIIT), the Excellence Center at Linköping–Lund in Information Technology (ELLIIT), and the Knut and Alice Wallenberg (KAW) Foundation.