
Distributed Machine Learning

Classical machine learning methods, including stochastic gradient descent (the optimizer behind backpropagation training), work great on one machine, but don't scale well to the cloud or cluster setting. We propose a variety of algorithmic frameworks for scaling machine learning across many workers. Many of our distributed ML experiments are done using USNA's Grace Supercomputer, which is currently hosted at the University of Maryland.

Variance reduction SGD

Stochastic Gradient Descent (SGD) has become one of the most popular optimization methods for training machine learning models on massive datasets. However, SGD suffers from two main drawbacks: (i) The noisy gradient updates have high variance, which slows down convergence as the iterates approach the optimum, and (ii) SGD scales poorly in distributed settings, typically experiencing rapidly decreasing marginal benefits as the number of workers increases. In this paper, we propose a highly parallel method, CentralVR, that uses error corrections to reduce the variance of SGD gradient updates, and scales linearly with the number of worker nodes. CentralVR enjoys low iteration complexity, provably linear convergence rates, and exhibits linear performance gains up to hundreds of cores for massive datasets. We compare CentralVR to state-of-the-art parallel stochastic optimization methods on a variety of models and datasets, and find that our proposed methods exhibit stronger scaling than other SGD variants.

Efficient Distributed SGD with Variance Reduction
Soham De, Tom Goldstein
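
To give the flavor of the variance-reduction idea, here is a minimal SVRG-style sketch in NumPy for a least-squares objective. It illustrates generic variance-reduced SGD, not the CentralVR algorithm itself, and the step size and iteration counts are illustrative choices.

```python
import numpy as np

def svrg_least_squares(X, y, lr=0.1, outer_iters=20, inner_iters=None, seed=0):
    """Variance-reduced SGD (SVRG-style) for f(w) = 1/(2n) ||Xw - y||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    inner_iters = inner_iters or n
    w = np.zeros(d)
    for _ in range(outer_iters):
        w_snap = w.copy()
        full_grad = X.T @ (X @ w_snap - y) / n   # exact gradient at the snapshot
        for _ in range(inner_iters):
            i = rng.integers(n)
            xi, yi = X[i], y[i]
            g_now = xi * (xi @ w - yi)           # noisy gradient at current iterate
            g_snap = xi * (xi @ w_snap - yi)     # same sample, evaluated at snapshot
            # Error-corrected update: still unbiased, but its variance shrinks
            # as w approaches the optimum, so no step-size decay is needed.
            w -= lr * (g_now - g_snap + full_grad)
    return w
```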

Big batch SGD

Classical stochastic gradient methods for optimization rely on noisy gradient approximations that become progressively less accurate as iterates approach a solution. The large noise and small signal in the resulting gradients make it difficult to use them for adaptive stepsize selection and automatic stopping. We propose alternative “big batch” SGD schemes that adaptively grow the batch size over time to maintain a nearly constant signal-to-noise ratio in the gradient approximation. The resulting methods have similar convergence rates to classical SGD methods without requiring convexity of the objective function. The high-fidelity gradients enable automated learning rate selection and do not require stepsize decay. For this reason, big batch methods are easily automated and can run with little or no user oversight.

Big Batch SGD: Automated Inference using Adaptive Batch Sizes
Soham De, Abhay Yadav, David Jacobs, Tom Goldstein
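
As a rough illustration of the adaptive batching idea, the sketch below grows the batch whenever the estimated gradient noise dominates the squared norm of the gradient estimate. The specific test, the threshold theta, and the constants are illustrative assumptions, not the exact criterion from the paper.

```python
import numpy as np

def big_batch_sgd(X, y, lr=0.1, theta=1.0, batch0=32, iters=200, seed=0):
    """Least-squares fit with a batch size that grows to keep gradient noise in check."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    batch = batch0
    for _ in range(iters):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        Xb, yb = X[idx], y[idx]
        per_sample = Xb * (Xb @ w - yb)[:, None]     # per-sample gradients, shape (b, d)
        g = per_sample.mean(axis=0)
        # Crude noise estimate: summed per-coordinate variance of the mean gradient.
        noise = per_sample.var(axis=0, ddof=1).sum() / len(idx)
        if noise > theta * (g @ g):                  # noise dominates the signal,
            batch = min(2 * batch, n)                # so grow the batch
        w -= lr * g
    return w
```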

Transpose reduction

Recent approaches to distributed model fitting rely heavily on consensus ADMM, where each node solves small sub-problems using only local data. We propose iterative methods that solve global sub-problems over an entire distributed dataset. This is possible using transpose reduction strategies that allow a single node to solve least-squares problems over massive datasets without putting all the data in one place. This results in simple iterative methods that avoid the expensive inner loops required for consensus methods. To demonstrate the efficiency of this approach, we fit linear classifiers and sparse linear models to datasets over 5 TB in size using a distributed implementation with over 7000 cores in far less time than previous approaches.

Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction
Tom Goldstein, Gavin Taylor, Kawika Barabin, Kent Sayre
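
A minimal sketch of the transpose-reduction trick for distributed least squares: each worker forms the small d x d matrix A_i^T A_i and the vector A_i^T b_i from its local block of rows, the results are sum-reduced, and a single node solves the global normal equations. The distributed reduction is mimicked here with a Python loop over shards; in practice it would be an MPI all-reduce over many cores.

```python
import numpy as np

def transpose_reduce_lstsq(shards, reg=1e-8):
    """shards: list of (A_i, b_i) row blocks, one per worker."""
    d = shards[0][0].shape[1]
    AtA = np.zeros((d, d))
    Atb = np.zeros(d)
    for A_i, b_i in shards:        # in practice: local products + an all-reduce
        AtA += A_i.T @ A_i
        Atb += A_i.T @ b_i
    # One node solves the *global* normal equations; the raw data never moves.
    return np.linalg.solve(AtA + reg * np.eye(d), Atb)

# Sanity check: stacking the shards and calling np.linalg.lstsq gives the same answer.
```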

Neural nets without gradients

With the growing importance of large network models and enormous training datasets, GPUs have become increasingly necessary to train neural networks. This is largely because conventional optimization algorithms rely on stochastic gradient methods that don’t scale well to large numbers of cores in a cluster setting. Furthermore, the convergence of all gradient methods, including batch methods, suffers from common problems like saturation effects, poor conditioning, and saddle points. This paper explores an unconventional training method that uses alternating direction methods and Bregman iteration to train networks without gradient descent steps. The proposed method reduces the network training problem to a sequence of minimization sub-steps that can each be solved globally in closed form. The proposed method is advantageous because it avoids many of the caveats that make gradient methods slow on highly non-convex problems. The method exhibits strong scaling in the distributed setting, yielding linear speedups even when split over thousands of cores.

Training Neural Networks Without Gradients: A Scalable ADMM Approach
Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, Tom Goldstein
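
To convey the flavor of training by closed-form sub-steps rather than gradient descent, here is a toy alternating least-squares fit of a two-layer linear network. It only illustrates exact per-block minimization; the actual method also splits activations, handles nonlinearities, and adds Bregman/Lagrange correction terms, none of which appear in this sketch.

```python
import numpy as np

def alternating_ls(X, Y, hidden=16, sweeps=20, reg=1e-6, seed=0):
    """Fit Y ~ W2 @ W1 @ X (columns = examples) by alternating exact solves."""
    rng = np.random.default_rng(seed)
    d_in, d_out = X.shape[0], Y.shape[0]
    W1 = 0.1 * rng.standard_normal((hidden, d_in))
    W2 = 0.1 * rng.standard_normal((d_out, hidden))
    for _ in range(sweeps):
        # Sub-step 1: with W1 fixed, W2 has a closed-form least-squares solution.
        A = W1 @ X                                             # hidden activations
        W2 = Y @ A.T @ np.linalg.inv(A @ A.T + reg * np.eye(hidden))
        # Sub-step 2: with W2 fixed, W1 also solves a (ridge-stabilized)
        # least-squares problem via its normal equations.
        lhs = W2.T @ W2 + reg * np.eye(hidden)
        rhs = W2.T @ Y @ X.T @ np.linalg.inv(X @ X.T + reg * np.eye(d_in))
        W1 = np.linalg.solve(lhs, rhs)
    return W1, W2
```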