
Project

Efficient Algorithms for Large-Scale Minimax Problems

This proposal investigates efficient algorithms for nonconvex minimax problems. Such problems have recently emerged in a diverse range of applications related to robust learning and dynamic decision making in uncertain environments. For example, adversarial training, which addresses the sensitivity of machine learning models, and distributionally robust optimization, which tackles decision making under uncertainty, are both formulated as minimax problems, as are many other applications. Until very recently, algorithms for solving such potentially nonconvex minimax problems came with few (if any) convergence guarantees, and in practice these problems were often tackled only by means of heuristics.
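For reference, and as an illustration rather than a formulation taken from the proposal itself, such problems can generically be written as

    min_x max_y φ(x, y),

where φ may be nonconvex in x and nonconcave in y; in adversarial training, for instance, x collects the model parameters and y the adversarial perturbation of the data.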

To tackle such problems, we leverage techniques from nonsmooth nonconvex analysis and develop efficient proximal algorithms with certified convergence guarantees. On the one hand, block-coordinate and stochastic variants are developed to cope with large-scale problems; on the other hand, to improve efficiency and to account for frequent issues such as ill conditioning of the underlying problems, Newton-type algorithms are proposed that nevertheless rely only on simple first-order oracles. The resulting algorithms are applied to problems in machine learning as well as distributionally robust model predictive control, yielding fast and robust state-of-the-art methods. Finally, open-source software packages are provided for all of the proposed algorithms.
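As an illustrative sketch only, and not the project's actual algorithms, the following minimal proximal gradient descent-ascent iteration shows the kind of proximal scheme alluded to above, for a problem of the form min_x max_y φ(x, y) + reg·||x||_1; all function names, step sizes, and the choice of the L1 regularizer are assumptions made here for concreteness.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1, standing in for a simple nonsmooth term g(x).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_gda(grad_x, grad_y, x0, y0, step_x=1e-2, step_y=1e-2, reg=1e-3, iters=1000):
        # Alternates a proximal gradient (descent) step in x with a gradient ascent step in y.
        x, y = x0.copy(), y0.copy()
        for _ in range(iters):
            x = soft_threshold(x - step_x * grad_x(x, y), step_x * reg)  # descent in x
            y = y + step_y * grad_y(x, y)                                # ascent in y
        return x, y

    # Toy example: φ(x, y) = 0.5*||x||^2 + x^T A y - 0.5*||y||^2 (convex-concave test case).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    x_star, y_star = prox_gda(lambda x, y: x + A @ y,
                              lambda x, y: A.T @ x - y,
                              np.ones(5), np.ones(5))

This basic scheme illustrates the structure (a proximal update on the minimization variable, an ascent update on the maximization variable); the proposal's contributions concern more sophisticated variants with convergence guarantees in the nonconvex setting, block-coordinate/stochastic updates, and Newton-type acceleration.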

Date: 1 Oct 2021 → Today
Keywords: Minimax Optimization, Nonsmooth Nonconvex Optimization, Proximal Algorithms
Disciplines: Calculus of variations and optimal control, optimisation; Operations research and mathematical programming; Systems theory, control; Machine learning and decision making