Publications

2018

signSGD with majority vote is communication efficient and Byzantine fault tolerant
Jeremy Bernstein, Jiawei Zhao, Kamyar Azizzadenesheli & Anima Anandkumar
[pdf] [code] [cite]
Under review for ICLR '19
@article{bernstein_majority,
  title = {sign{SGD} with {M}ajority {V}ote is {C}ommunication {E}fficient and {B}yzantine {F}ault {T}olerant},
  author = {Bernstein, Jeremy and Zhao, Jiawei and Azizzadenesheli, Kamyar and Anandkumar, Animashree},
  journal = {Under review for ICLR '19},
  year = {2018}
}
We show that when the parameter server aggregates gradient signs by majority vote, the resulting distributed optimisation scheme is both communication efficient and adversarially robust.
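A minimal sketch of the voting scheme described above, in NumPy; the function names are illustrative and this is not the released code:

import numpy as np

def worker_message(grad):
    # Each worker transmits only the elementwise sign of its stochastic
    # gradient: one bit per coordinate.
    return np.sign(grad)

def majority_vote(messages):
    # The parameter server sums the workers' sign vectors and returns the
    # sign of the total, i.e. an elementwise majority vote, again one bit
    # per coordinate on the broadcast back down.
    return np.sign(np.sum(messages, axis=0))

def step(params, messages, lr=1e-3):
    # Every worker applies the broadcast vote as the common update direction.
    return params - lr * majority_vote(messages)

# Three workers, one of them Byzantine (it sends flipped signs).
grads = [np.array([0.2, -0.1, 0.3]),
         np.array([0.1, -0.3, 0.2]),
         np.array([-0.2, 0.1, -0.3])]   # adversarial worker
params = step(np.zeros(3), [worker_message(g) for g in grads])
# The two honest workers outvote the adversary on every coordinate.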
signSGD: compressed optimisation for non-convex problems
Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli & Anima Anandkumar
[pdf] [poster] [slides] [code] [cite]
ICML '18 long talk
@InProceedings{bernstein_signum,
  title = {sign{SGD}: {C}ompressed {O}ptimisation for {N}on-{C}onvex {P}roblems},
  author = {Bernstein, Jeremy and Wang, Yu-Xiang and Azizzadenesheli, Kamyar and Anandkumar, Animashree},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  year = {2018},
  editor = {Dy, Jennifer and Krause, Andreas},
  volume = {80},
  series = {Proceedings of Machine Learning Research},
  address = {Stockholmsmässan, Stockholm, Sweden},
  month = {10--15 Jul},
  publisher = {PMLR}
}
We exploit the natural geometry of neural net error landscapes to develop an optimiser that converges as fast as SGD whilst providing cheap gradient communication for distributed training.
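A minimal sketch of the two update rules, signSGD and its momentum variant Signum, in NumPy; names and default hyperparameters are illustrative:

import numpy as np

def signsgd_step(params, grad, lr=1e-4):
    # signSGD: step along the elementwise sign of the stochastic gradient,
    # discarding magnitude information entirely.
    return params - lr * np.sign(grad)

def signum_step(params, grad, momentum, lr=1e-4, beta=0.9):
    # Signum: maintain an exponential moving average of gradients and step
    # along its sign.
    momentum = beta * momentum + (1 - beta) * grad
    return params - lr * np.sign(momentum), momentum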