Publications

2019

signSGD with majority vote is communication efficient and fault tolerant
Jeremy Bernstein, Jiawei Zhao, Kamyar Azizzadenesheli & Anima Anandkumar
[pdf] [code] [cite] · ICLR '19
@InProceedings{bernstein_majority,
  title     = {sign{SGD} with {M}ajority {V}ote is {C}ommunication {E}fficient and {F}ault {T}olerant},
  author    = {Bernstein, Jeremy and Zhao, Jiawei and Azizzadenesheli, Kamyar and Anandkumar, Animashree},
  booktitle = {International Conference on Learning Representations (ICLR-19)},
  year      = {2019}
}
We show that when the parameter server aggregates gradient signs by majority vote, the resulting distributed optimisation scheme is both communication efficient and robust to network faults and machine errors.
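
The aggregation step is simple enough to sketch. The snippet below is a minimal illustration, not the released code: the function name and the use of NumPy are my own choices, and it only shows how a server could combine one-bit gradient signs by majority vote.

    import numpy as np

    def majority_vote(sign_grads):
        """Hypothetical sketch: aggregate workers' 1-bit gradient signs.

        Each worker transmits only sign(g_i); the server tallies the
        +1/-1 votes per coordinate and broadcasts the majority sign.
        """
        votes = np.sum(sign_grads, axis=0)   # tally votes elementwise
        return np.sign(votes)                # majority decision (0 on ties)

    # Each worker then applies the broadcast result with a shared
    # learning rate: w <- w - lr * majority_vote([...])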

2018

signSGD: compressed optimisation for non-convex problems
Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli & Anima Anandkumar
[pdf] [poster] [slides] [code] [cite] · ICML '18 long talk
@InProceedings{bernstein_signum,
  title     = {sign{SGD}: {C}ompressed {O}ptimisation for {N}on-{C}onvex {P}roblems},
  author    = {Bernstein, Jeremy and Wang, Yu-Xiang and Azizzadenesheli, Kamyar and Anandkumar, Animashree},
  booktitle = {International Conference on Machine Learning (ICML-18)},
  year      = {2018}
}
We exploit the natural geometry of neural net error landscapes to develop an optimiser that converges as fast as SGD whilst providing cheap gradient communication for distributed training.
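
As a rough illustration of the update rule (a sketch, not the paper's released implementation; the function name and hyperparameter values are illustrative), signSGD with momentum steps each parameter by the sign of a running gradient average:

    import numpy as np

    def signum_step(param, grad, momentum, lr=1e-4, beta=0.9):
        """Hypothetical sketch of one signSGD-with-momentum update."""
        momentum = beta * momentum + (1 - beta) * grad   # running gradient average
        param = param - lr * np.sign(momentum)           # step by the sign only
        return param, momentum

    # Because only the sign of the update direction is needed, workers in a
    # distributed setting can compress each gradient coordinate to a single bit.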