

Course Information
Schedule: TuTh 12:30pm - 1:45pm
Instructor: Tom Goldstein. Office Hours: Th 2-3pm, AVW 3141
TA: Neal Gupta (nguptateaching gmail com). Office Hours: M 3-4pm, AVW 4103

Homework
All homework should be submitted on the UMD submit server. Instructions can be found here (courtesy of Neal).
Homework 1: gradients and Fourier transforms
Homework 2: descent methods
Homework 3: forward-backward splitting
Homework 4: ADMM

Course Description
This is an introductory survey of optimization for computer scientists. Special emphasis is placed on methods with applications in machine learning, model fitting, and image processing. The format of the class is split between a traditional lecture course, in which the instructor presents material, and a reading course, in which students present papers. There are no formal prerequisites for this course; however, students should have a strong background in applied mathematics (especially linear algebra) and computer programming. Students' grades will be based on completion of the homework assignments and in-class paper presentations.
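As a taste of the descent methods covered in homework 2, here is a minimal gradient-descent sketch. The quadratic objective, test matrix, and step-size rule below are illustrative assumptions, not part of the course materials:

```python
import numpy as np

def gradient_descent(grad, x0, step, iters=100):
    """Plain gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Illustrative example: minimize f(x) = 0.5*||Ax - b||^2,
# whose gradient is A^T (Ax - b).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
grad = lambda x: A.T @ (A @ x - b)
# A safe constant step size is 1/L, where L = ||A^T A|| (largest eigenvalue).
step = 1.0 / np.linalg.norm(A.T @ A, 2)
x = gradient_descent(grad, np.zeros(2), step, iters=500)
# x approaches the least-squares solution of Ax = b, i.e. [1, 3].
```

For this well-conditioned example a constant step size suffices; adaptive step-size rules (e.g. line search) are among the topics the lectures cover.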
Topics covered in lectures will include: multivariable calculus and optimality conditions, gradient methods, interior point methods, splitting methods, and stochastic optimization. Applications covered will include: fitting generalized linear models, sparse regression methods, matrix factorizations, neural networks, support vector machines, and more.

Book & Other Sources
All course materials are available for free online. Suggested reading material for various topics includes:
Numerical linear algebra: Numerical Linear Algebra by Trefethen and Bau
L1 models and sparsity: Sparse Modeling for Image and Vision Processing
Convex functions and gradient methods: Convex Optimization by Boyd and Vandenberghe
Proximal methods: A Field Guide to Forward-Backward Splitting
ADMM: Fast Alternating Direction Optimization Methods
Consensus ADMM: Distributed Optimization and Statistical Learning
Unwrapped ADMM: Unwrapping ADMM
PDHG: Adaptive Primal-Dual Hybrid Gradient Methods
SGD: Stochastic Gradient Descent for Non-Smooth Optimization
Monte Carlo: An Introduction to MCMC for Machine Learning
Barrier methods: Convex Optimization by Boyd and Vandenberghe, chapter 11
Primal-dual interior point methods: Nocedal and Wright, chapter 14
Semidefinite programming: Vandenberghe and Boyd
Metric learning: Distance Metric Learning for LMNN

Topics Covered
Course overview, linear algebra overview
Sparse models and L1 optimization
Total variation, calculus, and FFT
Solvers for linear problems
Convex functions
Standard form problems
Unconstrained optimization
Compressed sensing
Duality
Splitting methods
Interior point methods
High-dimensional statistics
Stochastic methods (backprop, SGD, Monte Carlo methods)
Integer programming
Derivative-free methods
Semidefinite programming

The course will focus on methods applicable to: sparse least-squares/lasso, total variation image processing, deconvolution, sparse+low-rank approximations, support vector machines, factor analysis, neural nets, logistic regression, and L-infinity regularized problems.

Restricted Access
Calendar
Lecture Slides
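The forward-backward splitting method of homework 3 can be sketched on the sparse least-squares/lasso problem listed above, min 0.5*||Ax-b||^2 + lam*||x||_1. The function names and test problem below are illustrative assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Forward-backward splitting for min 0.5*||Ax-b||^2 + lam*||x||_1:
    a gradient (forward) step on the smooth least-squares term, followed
    by a proximal (backward) step on the non-smooth l1 term."""
    step = 1.0 / np.linalg.norm(A.T @ A, 2)  # 1/L for the smooth term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

# Illustrative check: with A = I, the lasso solution is soft_threshold(b, lam).
A = np.eye(2)
b = np.array([3.0, 0.1])
x = ista(A, b, lam=1.0)
# x is approximately [2.0, 0.0]
```

The backward (proximal) step is what makes the method handle the non-differentiable l1 penalty, which plain gradient descent cannot.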
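The ADMM method of homework 4 can likewise be sketched on the lasso problem, written in the split form min 0.5*||Ax-b||^2 + lam*||z||_1 subject to x = z. The penalty parameter rho and the test data below are illustrative assumptions:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax-b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    # Factor once: every x-update solves (A^T A + rho*I) x = A^T b + rho*(z - u).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    x = z = u = np.zeros(n)
    for _ in range(iters):
        # x-update: minimize the smooth term plus the quadratic penalty.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal step on the l1 term (soft thresholding).
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        # Dual (scaled multiplier) update.
        u = u + x - z
    return z

# Illustrative check: with A = I, the lasso solution is soft thresholding of b.
z = admm_lasso(np.eye(2), np.array([3.0, 0.1]), lam=1.0)
# z is approximately [2.0, 0.0]
```

A practical appeal of ADMM, reflected here, is that the expensive linear-algebra factorization is computed once and reused across all iterations.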