To join this seminar via Zoom please click here.
If you would like to join the seminar and are not currently affiliated with ANU, please contact Kenneth Duru at firstname.lastname@example.org.
Non-convex optimisation formulations arise naturally in modern data science tasks such as factored matrix optimisation. For such problems, second-order stationary points often correspond to global optima, which motivates the development of second-order optimisation methods with fast convergence guarantees. Meanwhile, the study of acceleration phenomena, initially proposed for convex optimisation, has led to popular large-scale nonlinear optimisation frameworks being revisited so as to equip them with complexity guarantees while maintaining their practical appeal.
In this talk, we will study the benefits of using Krylov methods within optimisation procedures in a non-convex setting. We will focus in particular on the linear conjugate gradient method and its variants, and we will show that this method is naturally built to detect non-convexity on quadratic problems. We will then move to a general non-convex nonlinear optimisation setting and describe a Newton-Conjugate Gradient method with the best known complexity guarantees. We will also provide numerical illustrations of the performance of the proposed methods.
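The non-convexity detection mentioned in the abstract can be illustrated with a minimal sketch: when linear conjugate gradient is applied to the quadratic min 0.5 xᵀAx − bᵀx, every search direction p satisfies pᵀAp > 0 if A is positive definite, so a direction with pᵀAp ≤ 0 certifies non-convexity (the quadratic is unbounded below along p). The code below is an illustrative sketch under these standard definitions, not an implementation from the talk; the function name and return convention are assumptions.

```python
import numpy as np

def cg_with_curvature_check(A, b, tol=1e-10, max_iter=None):
    """Linear conjugate gradient on min 0.5 x^T A x - b^T x.

    Returns (x, direction, status), where status is "solved",
    "negative_curvature" (direction then holds a vector p with
    p^T A p <= 0, certifying that A is not positive definite),
    or "max_iter". Illustrative sketch only.
    """
    n = len(b)
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n)
    r = np.asarray(b, dtype=float).copy()  # residual b - A x at x = 0
    p = r.copy()
    if np.linalg.norm(r) <= tol:
        return x, None, "solved"
    for _ in range(max_iter):
        Ap = A @ p
        pAp = p @ Ap
        if pAp <= 0:  # curvature along p is non-positive: non-convexity detected
            return x, p, "negative_curvature"
        alpha = (r @ r) / pAp        # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol:
            return x, None, "solved"
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p         # next A-conjugate direction
        r = r_new
    return x, None, "max_iter"
```

On a positive definite system the routine behaves like plain CG, while on an indefinite matrix it exits early with a direction of non-positive curvature; Newton-CG methods of the kind discussed in the talk exploit such directions rather than merely aborting.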
Clément W. Royer is an associate professor of computer science at Université Paris Dauphine-PSL, France. Clément received his Ph.D. from the University of Toulouse, France, and was then a postdoctoral research associate at the Wisconsin Institute for Discovery, University of Wisconsin-Madison, USA. His research activities revolve around numerical optimisation and its applications, especially in complex systems and data science. He currently holds a springboard chair in optimisation at the Paris Artificial Intelligence Research InstitutE (PRAIRIE).