Asynchronous Byzantine Machine Learning (the Case of SGD)

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Asynchronous distributed machine learning solutions have proven very effective so far, but always assuming perfectly functioning workers. In practice, some of the workers can however exhibit Byzantine behavior, caused by hardware failures, software bugs, corrupt data, or even malicious attacks. We introduce Kardam, the first distributed asynchronous stochastic gradient descent (SGD) algorithm that copes with Byzantine workers. Kardam consists of two complementary components: a filtering and a dampening component. The first is scalar-based and ensures resilience against 1/3 Byzantine workers. Essentially, this filter leverages the Lipschitzness of cost functions and acts as a self-stabilizer against Byzantine workers that would attempt to corrupt the progress of SGD. The dampening component bounds the convergence rate by adjusting to stale information through a generic gradient weighting scheme. We prove that Kardam guarantees almost sure convergence in the presence of asynchrony and Byzantine behavior, and we derive its convergence rate. We evaluate Kardam on the CIFAR-100 and EMNIST datasets and measure its overhead with respect to non-Byzantine-resilient solutions. We empirically show that Kardam does not introduce additional noise to the learning procedure but does induce a slowdown (the cost of Byzantine resilience) that we both theoretically and empirically show to be less than f/n, where f is the number of Byzantine failures tolerated and n the total number of workers. Interestingly, we also empirically observe that the dampening component is interesting in its own right, as it enables building an SGD algorithm that outperforms alternative staleness-aware asynchronous competitors in environments with honest workers.
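
For intuition only, the following is a minimal Python sketch of the two ideas the abstract describes, not the paper's actual algorithm: the quantile-based acceptance threshold, the exponential decay rate lam, and all function names are illustrative assumptions; the paper derives the precise filtering condition and weighting scheme.

```python
import numpy as np

def empirical_lipschitz(grad, prev_grad, params, prev_params):
    # Empirical Lipschitz coefficient ||g - g'|| / ||x - x'|| for one worker.
    return np.linalg.norm(grad - prev_grad) / (np.linalg.norm(params - prev_params) + 1e-12)

def passes_filter(coeff, recent_coeffs, quantile=0.5):
    # Accept a gradient only if its empirical Lipschitz coefficient is not an
    # outlier among recently observed coefficients (threshold is illustrative).
    return coeff <= np.quantile(recent_coeffs, quantile)

def dampening_weight(staleness, lam=0.2):
    # Down-weight stale gradients; exponential decay is one plausible choice.
    return np.exp(-lam * staleness)

def apply_update(x, grad, staleness, lr=0.01):
    # Server-side SGD step on an accepted, dampened gradient,
    # e.g. x = apply_update(x, grad, staleness=3) once passes_filter(...) holds.
    return x - lr * dampening_weight(staleness) * grad
```

The split mirrors the abstract: the filter decides whether a worker's gradient is used at all, while the dampening weight only scales how much an accepted (possibly stale) gradient moves the model.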

Original language: English
Title of host publication: 35th International Conference on Machine Learning, ICML 2018
Editors: Andreas Krause, Jennifer Dy
Publisher: International Machine Learning Society (IMLS)
Pages: 1829-1858
Number of pages: 30
ISBN (Electronic): 9781510867963
Publication status: Published - 1 Jan 2018
Externally published: Yes
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018

Publication series

Name: 35th International Conference on Machine Learning, ICML 2018
Volume: 3

Conference

Conference: 35th International Conference on Machine Learning, ICML 2018
Country/Territory: Sweden
City: Stockholm
Period: 10/07/18 - 15/07/18
