
SplitML: A Unified Privacy-Preserving Architecture for Federated Split-Learning in Heterogeneous Environments

Research output: Contribution to journal › Article › Peer-reviewed

Abstract

While Federated Learning (FL) and Split Learning (SL) aim to uphold data confidentiality through localized training, they remain susceptible to adversarial threats such as model poisoning and sophisticated inference attacks. To mitigate these vulnerabilities, we propose SplitML, a secure and privacy-preserving framework for Federated Split Learning (FSL). By integrating (Formula presented.) secure Fully Homomorphic Encryption (FHE) with Differential Privacy (DP), SplitML establishes a defense-in-depth strategy that minimizes information leakage and thwarts reconstructive inference attempts. The framework accommodates heterogeneous model architectures by allowing clients to collaboratively train only the common top layers while keeping their bottom layers exclusive to each participant. This partitioning strategy ensures that the layers closest to the sensitive input data are never exposed to the centralized server. During the training phase, participants utilize multi-key CKKS FHE to facilitate secure weight aggregation, which ensures that no single entity can access individual updates in plaintext. For collaborative inference, clients exchange activations protected by single-key CKKS FHE to reach a consensus derived from Total Labels (TL) or Total Predictions (TP). This consensus mechanism enhances decision reliability by aggregating decentralized insights while obfuscating the soft-label confidence scores that attackers could otherwise exploit. Our empirical evaluation demonstrates that SplitML provides substantial defense against Membership Inference (MI) attacks, reduces training time compared to standard encrypted FL, and improves inference precision via its consensus mechanism, all while maintaining a negligible impact on federation overhead.
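The two consensus modes described above can be illustrated with a plain-Python sketch. This is a hypothetical illustration only: the function names are our own, the encryption layer is omitted (in SplitML the exchanged values would be protected by single-key CKKS FHE, not plaintext), and we assume TL means a majority vote over each client's hard label while TP means summing per-class confidence vectors before taking the argmax.

```python
from collections import Counter

def consensus_total_labels(client_labels):
    """TL-style consensus (assumed semantics): majority vote over the
    hard label each client reports for the same input."""
    return Counter(client_labels).most_common(1)[0][0]

def consensus_total_predictions(client_probs):
    """TP-style consensus (assumed semantics): sum the per-class
    confidence vectors across clients, then pick the argmax class.
    Only the aggregate is revealed, which hides any single client's
    soft-label confidence scores."""
    n_classes = len(client_probs[0])
    totals = [sum(p[c] for p in client_probs) for c in range(n_classes)]
    return max(range(n_classes), key=totals.__getitem__)

# Three hypothetical clients classifying the same input over 3 classes.
labels = [2, 2, 1]                      # hard labels from each client
probs = [[0.1, 0.2, 0.7],
         [0.2, 0.3, 0.5],
         [0.1, 0.6, 0.3]]               # soft predictions from each client

print(consensus_total_labels(labels))       # → 2
print(consensus_total_predictions(probs))   # → 2
```

Note that the two modes can disagree: a client with a very confident minority prediction can swing the TP result while losing the TL vote, which is one reason aggregating summed confidences versus hard labels are distinct design points.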

Original language: English
Article number: 267
Journal: Electronics (Switzerland)
Volume: 15
Issue number: 2
DOIs
Status: Published - 1 Jan. 2026
