Brief announcement: Byzantine-tolerant machine learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We report on Krum, the first provably Byzantine-tolerant aggregation rule for distributed Stochastic Gradient Descent (SGD). Krum guarantees the convergence of SGD even in a distributed setting where (asymptotically) up to half of the workers can be malicious adversaries trying to attack the learning system.
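The abstract names the aggregation rule but not its mechanics. As a point of reference, a minimal sketch of the Krum selection rule as defined in the authors' full paper ("Machine Learning with Adversaries", NeurIPS 2017) is shown below: with n workers and at most f Byzantine ones, each submitted gradient is scored by the sum of squared distances to its n − f − 2 closest peers, and the lowest-scoring gradient is selected. The function name and NumPy representation are illustrative choices, not part of this publication record.

```python
import numpy as np

def krum(gradients, f):
    """Krum aggregation (sketch): score each candidate gradient by the sum
    of squared distances to its n - f - 2 nearest other gradients, then
    return the candidate with the lowest score."""
    n = len(gradients)
    # The theory requires n >= 2f + 3 workers for Byzantine resilience.
    assert n >= 2 * f + 3, "Krum requires n >= 2f + 3 workers"
    # Pairwise squared Euclidean distances between submitted gradients.
    dists = np.array([[np.sum((g - h) ** 2) for h in gradients]
                      for g in gradients])
    scores = []
    for i in range(n):
        # Distances from candidate i to all others (self excluded), sorted.
        d = np.sort(np.delete(dists[i], i))
        # Sum over the n - f - 2 closest peers.
        scores.append(np.sum(d[: n - f - 2]))
    return gradients[int(np.argmin(scores))]
```

Because a Byzantine gradient placed far from the honest cluster accumulates large distances to every peer, it receives a high score and is never selected, which is the intuition behind the convergence guarantee stated above.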

Original language: English
Title of host publication: PODC 2017 - Proceedings of the ACM Symposium on Principles of Distributed Computing
Publisher: Association for Computing Machinery
Pages: 455-458
Number of pages: 4
ISBN (Electronic): 9781450349925
DOIs
Publication status: Published - 26 Jul 2017
Externally published: Yes
Event: 36th ACM Symposium on Principles of Distributed Computing, PODC 2017 - Washington, United States
Duration: 25 Jul 2017 - 27 Jul 2017

Publication series

Name: Proceedings of the Annual ACM Symposium on Principles of Distributed Computing
Volume: Part F129314

Conference

Conference: 36th ACM Symposium on Principles of Distributed Computing, PODC 2017
Country/Territory: United States
City: Washington
Period: 25/07/17 - 27/07/17

Keywords

  • Adversarial machine learning
  • Distributed stochastic gradient descent
