MATH6209 - Special Topics in Mathematics: Nonsmooth optimization with applications in inverse problems and imaging


Nonsmooth optimization refers to the general problem of minimizing functions that are typically not differentiable at their minimizers; for example, f(x) = |x| attains its minimum at x = 0, precisely where it fails to be differentiable. Such optimization problems arise in many applied fields, including imaging, inverse problems, and machine learning. Since the classical theory of optimization presumes a certain degree of differentiability of the functions to be optimized, it cannot be applied directly. In this course we introduce the theory of nonsmooth optimization and present the current state of numerical nonsmooth optimization. We focus on nonsmooth convex optimization problems, in which the objective function is convex but not necessarily differentiable.
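A canonical instance of such a problem, included here as an illustration, is the l1-regularized least squares (LASSO) model widely used in sparse recovery and imaging:

    \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1, \qquad \lambda > 0,

where the first term is smooth but the regularizer \|x\|_1 is convex and nondifferentiable wherever a component of x is zero; minimizers are typically sparse, and thus sit exactly at points of nondifferentiability.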

The course starts with examples that indicate the importance of nonsmooth optimization. We then introduce elements of convex analysis, subdifferential calculus, and the proximal mapping. Next we present various numerical algorithms for solving nonsmooth optimization problems, including the (accelerated) proximal gradient algorithms, the augmented Lagrangian method, the alternating direction method of multipliers, the primal-dual hybrid gradient methods, and the semismooth Newton methods. Finally, we present applications in inverse problems and imaging to demonstrate that nonsmooth optimization can produce better results than smooth optimization.
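To give a concrete flavour of these methods, below is a minimal Python sketch of the proximal gradient method (ISTA) applied to the LASSO problem above. The step size, iteration count, and synthetic data are illustrative assumptions rather than course material.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal mapping of t * ||.||_1: shrink each entry toward zero by t.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def proximal_gradient(A, b, lam, n_iters=500):
        # ISTA for min_x 0.5 * ||Ax - b||_2^2 + lam * ||x||_1.
        # Step size 1/L, with L = ||A||_2^2 the Lipschitz constant of the
        # smooth term's gradient, guarantees convergence for this convex problem.
        L = np.linalg.norm(A, 2) ** 2                  # squared spectral norm of A
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            grad = A.T @ (A @ x - b)                   # gradient of the smooth term
            x = soft_threshold(x - grad / L, lam / L)  # proximal step on the l1 term
        return x

    # Toy usage with synthetic sparse-recovery data (illustrative only).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0                                   # sparse ground truth
    b = A @ x_true
    x_hat = proximal_gradient(A, b, lam=0.1)

The accelerated variant (FISTA) keeps the same proximal step but adds an extrapolation (momentum) term, improving the convergence rate of the objective values from O(1/k) to O(1/k^2).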

We will try our best to make this course as self-contained as possible. It only requires students to have a solid background in linear algebra and multivariable calculus. Some background in functional analysis and optimization will be helpful but is not essential.

For further information please contact Qinian Jin.

 
