Prof. Dr. Sebastian Pokutta (Georgia Institute of Technology)
Monday, March 19, 2018 - 14:00
Technische Universität Berlin
Straße des 17. Juni 136, 10623 Berlin, MA 041
Conditional Gradient Descent methods are popular first-order methods for (smooth) constrained convex minimization and are heavily used in Machine Learning. Relying on a linear programming oracle, these methods often outperform projected gradient descent methods whenever projections onto the feasible region are expensive. Unfortunately, even these methods can suffer from prohibitive running times if the linear programming oracle itself is expensive, e.g., when the feasible region corresponds to a hard combinatorial optimization problem. In this talk, we will explore a general method to significantly speed up conditional gradient descent methods by replacing the linear programming oracle with a much easier and cheaper oracle, leading to real-world speedups of several orders of magnitude while maintaining identical theoretical convergence rates modulo (small!) constant factors. Moreover, we will further outline a conditionally accelerated lazy stochastic gradient descent method (CAL-SGD) that achieves optimal bounds in terms of the required gradient evaluations and calls to the new oracle, matching those for the more complex linear programming oracle. (Based on joint works with Gábor Braun, George Lan, Yi Zhou, and Daniel Zink.)
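To illustrate the idea of replacing the exact linear programming oracle with a cheaper one, here is a minimal sketch (not the speaker's implementation) of a "lazified" Frank-Wolfe step: a weak-separation-style oracle first searches a cache of previously returned vertices for one that already gives sufficient progress and only falls back to the exact LP call otherwise. The probability-simplex feasible region, the toy objective, the step-size rule, and the accuracy parameter `phi` are illustrative assumptions, not part of the abstract.

```python
import numpy as np

def lp_oracle_simplex(gradient):
    """Exact LP oracle over the probability simplex: argmin_v <gradient, v>."""
    v = np.zeros_like(gradient)
    v[np.argmin(gradient)] = 1.0
    return v

def weak_separation_oracle(gradient, x, cache, phi):
    """Return a cached vertex that already improves by at least phi;
    otherwise call the (expensive) exact LP oracle and cache its answer."""
    for v in cache:
        if gradient @ (x - v) >= phi:      # cached vertex gives enough progress
            return v, False
    v = lp_oracle_simplex(gradient)        # fall back to the exact oracle
    cache.append(v)
    return v, True

def lazy_frank_wolfe(grad_f, x0, iterations=200, phi0=1.0):
    """Frank-Wolfe with a lazy (caching) oracle; counts exact LP calls."""
    x, cache, phi = x0.copy(), [x0.copy()], phi0
    lp_calls = 0
    for t in range(iterations):
        g = grad_f(x)
        v, called_lp = weak_separation_oracle(g, x, cache, phi)
        lp_calls += called_lp
        if g @ (x - v) < phi:              # not enough progress anywhere: tighten target
            phi /= 2.0
            continue
        gamma = 2.0 / (t + 2.0)            # standard Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * v
    return x, lp_calls

if __name__ == "__main__":
    # Toy smooth objective f(x) = 0.5 * ||x - b||^2 over the simplex.
    b = np.array([0.1, 0.7, 0.2])
    x_star, lp_calls = lazy_frank_wolfe(lambda x: x - b, np.array([1.0, 0.0, 0.0]))
    print(x_star, "exact LP calls:", lp_calls)
```

In this sketch most iterations are served from the cache, so the number of exact LP calls stays far below the number of iterations, which is the source of the speedups described in the abstract.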
submitted by Dorothea Kiefer