TY - JOUR
T1 - SplitML
T2 - A Unified Privacy-Preserving Architecture for Federated Split-Learning in Heterogeneous Environments
AU - Trivedi, Devharsh
AU - Boudguiga, Aymen
AU - Kaaniche, Nesrine
AU - Triandopoulos, Nikos
N1 - Publisher Copyright:
© 2026 by the authors.
PY - 2026/1/1
Y1 - 2026/1/1
N2 - While Federated Learning (FL) and Split Learning (SL) aim to uphold data confidentiality through localized training, they remain susceptible to adversarial threats such as model poisoning and sophisticated inference attacks. To mitigate these vulnerabilities, we propose SplitML, a secure and privacy-preserving framework for Federated Split Learning (FSL). By integrating secure Fully Homomorphic Encryption (FHE) with Differential Privacy (DP), SplitML establishes a defense-in-depth strategy that minimizes information leakage and thwarts reconstructive inference attempts. The framework accommodates heterogeneous model architectures by allowing clients to collaboratively train only the common top layers while keeping their bottom layers exclusive to each participant. This partitioning strategy ensures that the layers closest to the sensitive input data are never exposed to the centralized server. During the training phase, participants use multi-key CKKS FHE for secure weight aggregation, ensuring that no single entity can access individual updates in plaintext. For collaborative inference, clients exchange activations protected by single-key CKKS FHE to reach a consensus derived from Total Labels (TL) or Total Predictions (TP). This consensus mechanism enhances decision reliability by aggregating decentralized insights while obfuscating soft-label confidence scores that could be exploited by attackers. Our empirical evaluation demonstrates that SplitML provides substantial defense against Membership Inference (MI) attacks, reduces training time compared to standard encrypted FL, and improves inference precision via its consensus mechanism, all while maintaining a negligible impact on federation overhead.
AB - While Federated Learning (FL) and Split Learning (SL) aim to uphold data confidentiality through localized training, they remain susceptible to adversarial threats such as model poisoning and sophisticated inference attacks. To mitigate these vulnerabilities, we propose SplitML, a secure and privacy-preserving framework for Federated Split Learning (FSL). By integrating secure Fully Homomorphic Encryption (FHE) with Differential Privacy (DP), SplitML establishes a defense-in-depth strategy that minimizes information leakage and thwarts reconstructive inference attempts. The framework accommodates heterogeneous model architectures by allowing clients to collaboratively train only the common top layers while keeping their bottom layers exclusive to each participant. This partitioning strategy ensures that the layers closest to the sensitive input data are never exposed to the centralized server. During the training phase, participants use multi-key CKKS FHE for secure weight aggregation, ensuring that no single entity can access individual updates in plaintext. For collaborative inference, clients exchange activations protected by single-key CKKS FHE to reach a consensus derived from Total Labels (TL) or Total Predictions (TP). This consensus mechanism enhances decision reliability by aggregating decentralized insights while obfuscating soft-label confidence scores that could be exploited by attackers. Our empirical evaluation demonstrates that SplitML provides substantial defense against Membership Inference (MI) attacks, reduces training time compared to standard encrypted FL, and improves inference precision via its consensus mechanism, all while maintaining a negligible impact on federation overhead.
KW - differential privacy
KW - federated learning
KW - fully homomorphic encryption
KW - privacy-preserving machine learning
KW - split learning
UR - https://www.scopus.com/pages/publications/105028658190
U2 - 10.3390/electronics15020267
DO - 10.3390/electronics15020267
M3 - Article
AN - SCOPUS:105028658190
SN - 2079-9292
VL - 15
JO - Electronics (Switzerland)
JF - Electronics (Switzerland)
IS - 2
M1 - 267
ER -