TY - GEN
T1 - Poster
T2 - 31st ACM SIGSAC Conference on Computer and Communications Security, CCS 2024
AU - Athanasiou, Andreas
AU - Jung, Kangsoo
AU - Palamidessi, Catuscia
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/12/9
Y1 - 2024/12/9
N2 - Federated Learning (FL) enables clients to train a joint model without disclosing their local data. Instead, they share their local model updates with a central server that moderates the process and creates a joint model. However, FL is susceptible to a series of privacy attacks. Recently, the source inference attack (SIA) has been proposed, in which an honest-but-curious central server tries to identify exactly which client owns a specific data record. In this work, we propose a defense against SIAs that uses a trusted shuffler, without compromising the accuracy of the joint model. We employ a combination of unary encoding and shuffling, which effectively blends all clients’ model updates, preventing the central server from inferring information about any individual client’s model update. To address the increased communication cost of unary encoding, we employ quantization. Our preliminary experiments show promising results: the proposed mechanism notably decreases the accuracy of SIAs without compromising the accuracy of the joint model.
KW - Federated Learning
KW - Shuffling
KW - Source Inference Attack
KW - Unary Encoding
UR - https://www.scopus.com/pages/publications/85215509220
U2 - 10.1145/3658644.3691411
DO - 10.1145/3658644.3691411
M3 - Conference contribution
AN - SCOPUS:85215509220
T3 - CCS 2024 - Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security
SP - 5036
EP - 5038
BT - CCS 2024 - Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery, Inc
Y2 - 14 October 2024 through 18 October 2024
ER -