TY - GEN
T1 - FADE
T2 - 8th Annual ACM Conference on Fairness, Accountability, and Transparency, FAccT 2025
AU - Bendoukha, Adda Akram
AU - Arcolezi, Héber Hwang
AU - Kaaniche, Nesrine
AU - Boudguiga, Aymen
AU - Sirdey, Renaud
AU - Clet, Pierre-Emmanuel
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/6/23
Y1 - 2025/6/23
N2 - In this work, we investigate how unfair updates with opposing biases can cancel each other out during aggregation in federated learning (FL), yielding an overall model that is fairer from a group-fairness perspective. We analyze this Federated Aggregation with Discrimination Elimination (FADE) phenomenon both analytically and empirically, for linear and nonlinear models. Building on this observation, we introduce two novel fairness-aware FL aggregation strategies. The first, FADE-OptW, uses sequential optimization to tune the weight assigned to each client according to its fairness level. The second, FADE-SSP, identifies, for a given metric, the optimal subset of clients that minimizes the weighted-average fairness level at each round along the convergence path. Our experiments demonstrate significant fairness improvements, achieving up to a 60% reduction in discrimination compared to standard FedAvg-based FL, while maintaining the model's predictive performance on highly heterogeneous client data distributions.
AB - In this work, we investigate how unfair updates with opposing biases can cancel each other out during aggregation in federated learning (FL), yielding an overall model that is fairer from a group-fairness perspective. We analyze this Federated Aggregation with Discrimination Elimination (FADE) phenomenon both analytically and empirically, for linear and nonlinear models. Building on this observation, we introduce two novel fairness-aware FL aggregation strategies. The first, FADE-OptW, uses sequential optimization to tune the weight assigned to each client according to its fairness level. The second, FADE-SSP, identifies, for a given metric, the optimal subset of clients that minimizes the weighted-average fairness level at each round along the convergence path. Our experiments demonstrate significant fairness improvements, achieving up to a 60% reduction in discrimination compared to standard FedAvg-based FL, while maintaining the model's predictive performance on highly heterogeneous client data distributions.
KW - Algorithmic Fairness
KW - Federated Learning
KW - Group Fairness
UR - https://www.scopus.com/pages/publications/105010818262
U2 - 10.1145/3715275.3732203
DO - 10.1145/3715275.3732203
M3 - Conference contribution
AN - SCOPUS:105010818262
T3 - FAccT 2025 - Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
SP - 3182
EP - 3195
BT - FAccT 2025 - Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
PB - Association for Computing Machinery, Inc
Y2 - 23 June 2025 through 26 June 2025
ER -