TY - GEN
T1 - Asynchronous Byzantine machine learning (the case of SGD)
AU - Damaskinos, Georgios
AU - El Mhamdi, El Mahdi
AU - Guerraoui, Rachid
AU - Patra, Rhicheek
AU - Taziki, Mahsa
N1 - Publisher Copyright:
© 2018 35th International Conference on Machine Learning, ICML 2018. All rights reserved.
PY - 2018/1/1
Y1 - 2018/1/1
N2 - Asynchronous distributed machine learning solutions have proven very effective so far, but always assuming perfectly functioning workers. In practice, some of the workers can however exhibit Byzantine behavior, caused by hardware failures, software bugs, corrupt data, or even malicious attacks. We introduce Kardam, the first distributed asynchronous stochastic gradient descent (SGD) algorithm that copes with Byzantine workers. Kardam consists of two complementary components: a filtering and a dampening component. The first is scalar-based and ensures resilience against 1/3 Byzantine workers. Essentially, this filter leverages the Lipschitzness of cost functions and acts as a self-stabilizer against Byzantine workers that would attempt to corrupt the progress of SGD. The dampening component bounds the convergence rate by adjusting to stale information through a generic gradient weighting scheme. We prove that Kardam guarantees almost sure convergence in the presence of asynchrony and Byzantine behavior, and we derive its convergence rate. We evaluate Kardam on the CIFAR-100 and EMNIST datasets and measure its overhead with respect to non Byzantine-resilient solutions. We empirically show that Kardam does not introduce additional noise to the learning procedure but does induce a slowdown (the cost of Byzantine resilience) that we both theoretically and empirically show to be less than f/n, where f is the number of Byzantine failures tolerated and n the total number of workers. Interestingly, we also empirically observe that the dampening component is interesting in its own right, for it enables building an SGD algorithm that outperforms alternative staleness-aware asynchronous competitors in environments with honest workers.
AB - Asynchronous distributed machine learning solutions have proven very effective so far, but always assuming perfectly functioning workers. In practice, some of the workers can however exhibit Byzantine behavior, caused by hardware failures, software bugs, corrupt data, or even malicious attacks. We introduce Kardam, the first distributed asynchronous stochastic gradient descent (SGD) algorithm that copes with Byzantine workers. Kardam consists of two complementary components: a filtering and a dampening component. The first is scalar-based and ensures resilience against 1/3 Byzantine workers. Essentially, this filter leverages the Lipschitzness of cost functions and acts as a self-stabilizer against Byzantine workers that would attempt to corrupt the progress of SGD. The dampening component bounds the convergence rate by adjusting to stale information through a generic gradient weighting scheme. We prove that Kardam guarantees almost sure convergence in the presence of asynchrony and Byzantine behavior, and we derive its convergence rate. We evaluate Kardam on the CIFAR-100 and EMNIST datasets and measure its overhead with respect to non Byzantine-resilient solutions. We empirically show that Kardam does not introduce additional noise to the learning procedure but does induce a slowdown (the cost of Byzantine resilience) that we both theoretically and empirically show to be less than f/n, where f is the number of Byzantine failures tolerated and n the total number of workers. Interestingly, we also empirically observe that the dampening component is interesting in its own right, for it enables building an SGD algorithm that outperforms alternative staleness-aware asynchronous competitors in environments with honest workers.
UR - https://www.scopus.com/pages/publications/85057280451
M3 - Conference contribution
AN - SCOPUS:85057280451
T3 - 35th International Conference on Machine Learning, ICML 2018
SP - 1829
EP - 1858
BT - 35th International Conference on Machine Learning, ICML 2018
A2 - Krause, Andreas
A2 - Dy, Jennifer
PB - International Machine Learning Society (IMLS)
T2 - 35th International Conference on Machine Learning, ICML 2018
Y2 - 10 July 2018 through 15 July 2018
ER -