Primal-dual hybrid gradient method

The Primal-Dual Hybrid Gradient (PDHG) method, also known as the Chambolle-Pock method, is a powerful splitting method that can solve a wide range of constrained and non-differentiable optimization problems. Unlike the popular ADMM method, the PDHG approach usually does not require expensive minimization sub-steps. We provide adaptive stepsize selection rules that automate the solver and increase its speed and robustness. The test problems and adaptive stepsize strategies presented here were proposed in our papers Adaptive Primal-Dual Hybrid Gradient Methods for Saddle-Point Problems and Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing.

What does PDHG solve?

PDHG solves general saddle-point problems of the form $$\min_x \max_y f(x) + \langle Ax,y \rangle - g(y)$$ where \(f\) and \(g\) are convex functions, and \(A\) is a linear operator.
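For this saddle-point form, each PDHG iteration alternates a proximal ascent step in \(y\) and a proximal descent step in \(x\), with an extrapolation (over-relaxation) of the primal variable. The distributed solver is written in MATLAB; as a language-neutral illustration, here is a minimal non-adaptive PDHG loop in Python applied to a toy quadratic saddle-point problem (the function names, step sizes, and test instance below are our own and not part of the code distribution):

```python
import numpy as np

def pdhg(prox_f, prox_g, A, x0, y0, tau, sigma, iters=1000):
    """Basic (non-adaptive) PDHG for  min_x max_y  f(x) + <Ax,y> - g(y)."""
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(iters):
        y = prox_g(y + sigma * (A @ x_bar), sigma)  # dual ascent + prox of g
        x_new = prox_f(x - tau * (A.T @ y), tau)    # primal descent + prox of f
        x_bar = 2 * x_new - x                       # extrapolation step
        x = x_new
    return x, y

# Toy instance: f(x) = 0.5||x-b||^2, g(y) = 0.5||y||^2.
# Maximizing over y gives y = Ax, so the saddle point satisfies
# x* = (I + A^T A)^{-1} b, which we can check directly.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(10)
L = np.linalg.norm(A, 2)           # spectral norm of A
tau = sigma = 0.9 / L              # ensures tau * sigma * ||A||^2 < 1
prox_f = lambda v, t: (v + t * b) / (1 + t)   # prox of 0.5||.-b||^2
prox_g = lambda w, s: w / (1 + s)             # prox of 0.5||.||^2
x, y = pdhg(prox_f, prox_g, A, np.zeros(10), np.zeros(20), tau, sigma, iters=5000)
x_star = np.linalg.solve(np.eye(10) + A.T @ A, b)
print(np.allclose(x, x_star, atol=1e-6))
```

The stepsize condition \(\tau\sigma \|A\|^2 < 1\) used above guarantees convergence of the plain method; the adaptive rules in pdhg_adaptive.m tune \(\tau\) and \(\sigma\) automatically as the iteration proceeds.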

In addition to a solver for general problems of this type, the PDHG code distribution comes with wrappers to solve a range of common problems, which are listed below.

\begin{align} \text{Total-Variation denoising}& \qquad \min_x |\nabla x| + \frac{\mu}{2}||x-f||^2 \\ \text{TVL1 denoising}& \qquad \min_x |\nabla x| + \mu |x-f| \\ \text{Convex Image Segmentation}& \qquad \min_x |\nabla x| +\mu \langle x,f\rangle \\ \text{Scaled lasso}& \qquad \min_x |x| +\mu ||Ax-b|| \\ \text{Linear programming}& \qquad \min_x \langle x,f\rangle \quad \text{subject to} \quad Ax\le b \\ \text{Democratic representations}& \qquad \min_x ||x||_\infty \quad \text{subject to} \quad ||Ax-b|| \le \mu \end{align} Note that \(|\cdot|\) denotes the \(\ell_1\)-norm, \(||\cdot||\) denotes the \(\ell_2\)-norm, \(||\cdot||_\infty\) denotes the \(\ell_\infty\)-norm, \(A\) represents an arbitrary data matrix, and \(\mu\) is a positive scalar regularization parameter.
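As a concrete example of how such problems fit the saddle-point template, the total-variation denoising problem can be rewritten by dualizing the \(\ell_1\)-norm, using the identity \(|z| = \max_{||y||_\infty \le 1} \langle z, y \rangle\): $$\min_x \max_{||y||_\infty \le 1} \langle \nabla x, y \rangle + \frac{\mu}{2}||x-f||^2.$$ This matches the general form with \(A = \nabla\), the quadratic term playing the role of the function \(f\) in the template (here the symbol \(f\) inside the norm is the noisy image), and \(g\) equal to the indicator function of the unit \(\ell_\infty\) ball. The wrappers in the code distribution perform reformulations of this kind automatically.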

Code

The PDHG code distribution contains a variety of solvers and test scripts for each solver. The main solver is pdhg_adaptive.m, which solves general form saddle-point problems. Wrapper solvers are provided for the test problems described above. The wrappers formulate a specific problem form as a general saddle-point problem, hand the problem instance to pdhg_adaptive.m, and then return the results. The top level folder contains a variety of test scripts, one for each wrapper/solver. When run from the command line, these test scripts will generate a random problem instance, solve the problem using one of the PDHG wrappers, and plot convergence results. Details of how to call each solver are given in the comment block at the top of each file.

The following files are available for download:

Papers: Adaptive Primal-Dual Hybrid Gradient Methods for Saddle-Point Problems and Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing.
Code: Adaptive PDHG solvers

How to cite PDHG

If you find that our adaptive PDHG code has contributed to your published work, please include the following citation:

@inproceedings{goldstein2015adaptive,
  title={Adaptive primal-dual splitting methods for statistical learning and image processing},
  author={Goldstein, Tom and Li, Min and Yuan, Xiaoming},
  booktitle={Advances in Neural Information Processing Systems},
  pages={2089--2097},
  year={2015}
}