Abstract
This talk will discuss recent developments concerning the speed of convergence of two iconic algorithms for convex optimization: the projected gradient algorithm and the conditional gradient algorithm. Both algorithms, along with many of their variants, are used extensively to solve large-scale optimization problems across a variety of disciplines.
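To make the two methods concrete, the sketch below shows one plain-vanilla step rule for each algorithm. This is an illustration only, not the speakers' code; the simplex test problem, the step size, and all helper names are assumptions made here.

import numpy as np

def projected_gradient(grad, project, x0, step, iters=100):
    """Projected gradient: take a gradient step, then project back onto the set."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

def conditional_gradient(grad, lmo, x0, iters=100):
    """Conditional gradient (Frank-Wolfe): move toward the point returned
    by a linear minimization oracle over the feasible set."""
    x = x0
    for k in range(iters):
        s = lmo(grad(x))         # argmin over the set of <grad f(x), s>
        gamma = 2.0 / (k + 2)    # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Illustrative instance: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.2, 1.5, -0.3])
grad = lambda x: 2 * (x - b)

def project_simplex(y):
    """Euclidean projection onto the probability simplex (sort-based method)."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - 1) / np.arange(1, len(y) + 1) > 0)[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(y - theta, 0)

def lmo_simplex(g):
    """Linear minimization over the simplex is attained at a coordinate vertex."""
    e = np.zeros_like(g)
    e[np.argmin(g)] = 1.0
    return e

x_pg = projected_gradient(grad, project_simplex, np.ones(3) / 3, step=0.25)
x_cg = conditional_gradient(grad, lmo_simplex, np.ones(3) / 3)

Note the contrast in the per-iteration work: the projected gradient algorithm needs a projection onto the set, while the conditional gradient algorithm only needs a linear minimization oracle, which is often much cheaper for structured sets.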
We show that, under suitable conditions on the objective function and the constraint set, both the projected gradient and the conditional gradient algorithms converge linearly. Furthermore, we show that the rate of convergence is governed by a suitable notion of "relative condition number" that captures key geometric properties of the problem.
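For context, here is the classical analogue of such a statement (it is not necessarily the exact bound from the talk): when f is L-smooth and mu-strongly convex, projected gradient descent with step size 1/L contracts linearly at a rate governed by the classical condition number,

\[
\|x_k - x^\star\|^2 \;\le\; \Bigl(1 - \frac{\mu}{L}\Bigr)^{k} \|x_0 - x^\star\|^2,
\qquad \kappa = \frac{L}{\mu}.
\]

The "relative condition number" of the talk presumably plays the role of kappa while also reflecting the geometry of the constraint set, consistent with the abstract's statement that it captures key geometric properties of the problem.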
The talk is based on joint work with the following collaborators: David Gutman (Carnegie Mellon), Daniel Rodriguez (Google), Juan Vera (Tilburg), and Luis Zuluaga (Lehigh).
About the speaker
Access this information by clicking here
For further information:
Maestría en Ciencias en Finanzas (Master of Science in Finance)