A Positive Outlook on Negative Curvature
August 15, 2018, 10:15 AM - 10:45 AM
Daniel Robinson, Johns Hopkins University
The recent surge in interest in nonconvex models (e.g., in deep learning, subspace clustering, and dictionary learning) emphasizes a need for a fresh look at nonconvex optimization algorithms with provable convergence guarantees. A major factor in the design of such methods is the manner in which negative curvature is handled. In this talk, I present recent work that supports the following claims: (i) Commonly employed strategies for using negative curvature directions usually hurt algorithm performance; (ii) A new strategy based on upper-bounding models allows directions of negative curvature to be used while improving performance; and (iii) This strategy of using upper-bounding models is readily adapted for stochastic optimization, thus making it an attractive approach for large-scale "big data" problems. The talk also touches on worst-case complexity bounds and the pitfalls of attempting to associate such bounds with practical performance.
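To make the central notion concrete, the sketch below (not taken from the talk) illustrates what a direction of negative curvature is and how it is typically used: if the Hessian at a point has a negative eigenvalue, the corresponding eigenvector d satisfies d^T H d < 0, so the objective can be decreased along d even when the gradient is nearly zero, as near a saddle point. The test function, step size, and orientation rule are illustrative assumptions only.

```python
import numpy as np

def f(x):
    # Simple nonconvex function with a saddle point at the origin.
    return x[0] ** 2 - x[1] ** 2

def grad(x):
    return np.array([2.0 * x[0], -2.0 * x[1]])

def hessian(x):
    return np.array([[2.0, 0.0], [0.0, -2.0]])

x = np.array([1e-3, 1e-3])           # near the saddle, the gradient is tiny
g, H = grad(x), hessian(x)

eigvals, eigvecs = np.linalg.eigh(H)  # eigenvalues in ascending order
if eigvals[0] < 0:                    # negative curvature detected
    d = eigvecs[:, 0]                 # eigenvector of the most negative eigenvalue
    if g @ d > 0:                     # orient d so it is a descent direction
        d = -d
else:
    d = -g                            # otherwise fall back to steepest descent

alpha = 0.1                           # fixed step size, for illustration only
x_new = x + alpha * d
print(f"f(x) = {f(x):.6e} -> f(x_new) = {f(x_new):.6e}")
```

Here gradient-based progress stalls near the saddle, while the negative curvature step produces a strict decrease; how aggressively and in what combination such directions should be exploited is exactly the design question the talk addresses.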