Regularised black-box optimisation algorithms for least-squares problems
Least-squares problems are one of the most common types of optimisation problem arising in applications (e.g. parameter fitting). Standard optimisation algorithms for solving these problems are well-understood, but require the user to be able to provide derivatives of the objective. In settings where the objective is black-box, expensive to evaluate or noisy, such as climate modelling, this may be impractical or impossible, and we must consider derivative-free optimisation methods.
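As a small illustration (not part of the project specification), a black-box least-squares fit can be tackled with a derivative-free method such as Nelder–Mead, which uses only objective values. The model, data, and variable names below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical black-box residual: we can evaluate it, but assume we cannot
# differentiate it. Here it is secretly an exponential-decay model fit.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)  # synthetic data generated from x* = (2.0, 1.5)

def residuals(x):
    return x[0] * np.exp(-x[1] * t) - y

def objective(x):
    # Least-squares objective f(x) = ||r(x)||^2, treated as a black box
    r = residuals(x)
    return r @ r

# Nelder-Mead uses only function evaluations, never derivatives
result = minimize(objective, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print(result.x)  # x is approximately (2.0, 1.5)
```

In practice the residuals would come from an expensive simulation (e.g. a climate model run) rather than a closed-form expression, which is what makes derivative-free methods necessary.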
The aim of this project is to examine methods for extending derivative-free algorithms for least-squares problems to include (possibly nonsmooth) regularisation terms with known structure. Regularisation is a very common feature of parameter fitting problems, and is often used to avoid overfitting or to encode prior information about the solution (e.g. sparsity). It is well-studied in standard, derivative-based settings, but has received relatively little attention in derivative-free optimisation.
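To sketch what such a problem looks like, the example below adds a nonsmooth L1 term (which promotes sparsity) to a least-squares objective and again applies Nelder–Mead naively to the whole objective; the data, regularisation weight, and variable names are illustrative assumptions. A structure-exploiting method of the kind this project studies would instead treat the regulariser explicitly, rather than lumping it into the black box:

```python
import numpy as np
from scipy.optimize import minimize

# Sparse linear least-squares with L1 regularisation (lasso-type), solved
# derivative-free. Illustrative only: in the project's setting the residuals
# are a black box, while the regulariser's structure is known and exploitable.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
x_true = np.array([1.0, 0.0, 0.0, -2.0, 0.0])  # sparse ground truth
b = A @ x_true

lam = 0.1  # regularisation weight (hypothetical choice)

def objective(x):
    r = A @ x - b  # black-box residuals in the real setting
    return r @ r + lam * np.abs(x).sum()  # smooth part + nonsmooth L1 term

result = minimize(objective, x0=np.zeros(5), method="Nelder-Mead",
                  options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
```

Applying a generic derivative-free method to the nonsmooth objective, as above, ignores the known structure of the L1 term; exploiting that structure (e.g. via proximal-type steps, as in Hare and Lucet below) is the kind of extension this project would investigate.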
This project would suit students interested in numerical analysis, optimisation, or scientific computing. Experience in programming, particularly numerical/scientific computing in Python, would be useful.
Bibliography:
- C. Cartis, J. Fiala, B. Marteau, and L. Roberts, Improving the flexibility and robustness of model-based derivative-free optimization solvers, ACM Transactions on Mathematical Software, 45 (2019), article 32.
- A. Chambolle and T. Pock, An introduction to continuous optimization for imaging, Acta Numerica, 25 (2016), pp. 161–319.
- W. L. Hare and Y. Lucet, Derivative-free optimization via proximal point methods, Journal of Optimization Theory and Applications, 160 (2014), pp. 204–220.
- R. Garmanjani, D. Júdice, and L. N. Vicente, Trust-region methods without using derivatives: Worst case complexity and the nonsmooth case, SIAM Journal on Optimization, 26 (2016), pp. 1987–2011.