LSE Research Online

Quantized stochastic gradient descent: communication versus convergence

Alistarh, D., Li, J., Tomioka, R. and Vojnovic, M. ORCID: 0000-0003-1382-022X (2016) Quantized stochastic gradient descent: communication versus convergence. In: OPT 2016, 2016-12-10, Barcelona, Spain.

Full text not available from this repository.

Abstract

Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to the excellent scalability properties of this algorithm and its efficiency in the context of training deep neural networks. A fundamental barrier to parallelizing large-scale SGD is that the cost of communicating gradient updates between nodes can be very large. Consequently, lossy compression heuristics have been proposed, by which nodes communicate only quantized gradients. Although effective in practice, these heuristics do not always provably converge, and it is not clear whether they are optimal. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes that allows the compression of gradient updates at each node while guaranteeing convergence under standard assumptions. QSGD allows the user to trade off compression and convergence time: it can communicate a sublinear number of bits per iteration in the model dimension, and can achieve asymptotically optimal communication cost. We complement our theoretical results with empirical data, showing that QSGD can significantly reduce communication cost while remaining competitive with standard uncompressed techniques on a variety of real tasks.
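
Since the full text is not deposited here, the following is a minimal, hypothetical NumPy sketch of the general kind of stochastic gradient quantization the abstract describes: each coordinate is rescaled by the vector's norm and stochastically rounded to one of a small number of discrete levels, so the quantized vector stays an unbiased estimate of the gradient while admitting a much shorter encoding. The function name `quantize` and the choice of `s` levels are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def quantize(grad, s=256, rng=None):
    """Stochastically quantize a gradient vector to s levels per coordinate.

    Each coordinate is represented by its sign and an integer level in
    {0, ..., s}, scaled by the vector's L2 norm. Rounding up happens with
    probability equal to the fractional part, so the result is an unbiased
    estimator of the input.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    scaled = np.abs(grad) / norm * s          # magnitudes mapped onto [0, s]
    lower = np.floor(scaled)
    levels = lower + (rng.random(grad.shape) < (scaled - lower))
    return norm * np.sign(grad) * levels / s  # dequantized (reconstructed) vector


# Illustrative single parallel SGD step with quantized gradient exchange.
if __name__ == "__main__":
    dim, lr = 10_000, 0.1
    x = np.zeros(dim)
    local_grads = [np.random.randn(dim) for _ in range(4)]  # stand-in node gradients
    # Each node quantizes before communicating; the average of the quantized
    # gradients remains an unbiased estimate of the averaged true gradient.
    avg = np.mean([quantize(g, s=16) for g in local_grads], axis=0)
    x -= lr * avg
```

In such a scheme, smaller `s` means fewer bits per coordinate (more compression) at the price of higher variance in the gradient estimate, which is the compression-versus-convergence trade-off the abstract refers to.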

Item Type: Conference or Workshop Item (Paper)
Official URL: http://opt-ml.org/oldopt/opt16/papers.html
Additional Information: © 2016 The Authors
Divisions: Statistics
Subjects: H Social Sciences > HA Statistics
Date Deposited: 23 Nov 2017 11:11
Last Modified: 01 Oct 2024 04:02
URI: http://eprints.lse.ac.uk/id/eprint/85701
