11.00 am, Lecture Hall 4
Manifold Learning for Escaping Saddle Points and Tensor Decomposition
Chennai Mathematical Institute.
One of the key problems in optimization is understanding the optima of non-convex functions, because of both its theoretical and practical importance. In general, optimizing a non-convex function is NP-hard. The difficulty arises for two reasons: finding the global solution among many local solutions, and the presence of saddle points. First-order methods (which rely only on first-order derivatives) have been employed to tackle the problem but are prone to getting stuck at saddle points, since the gradient vanishes there, i.e. grad f(saddle point) = 0. In this talk, we'll see that we can still escape saddle points efficiently using first-order methods under a mild condition on the non-convex function called the "Saddle-Point Property". Along the way, we'll see how the Stable Manifold Theorem guarantees that the set of starting points (for first-order methods) that get trapped around saddle points has measure zero, which strengthens the intuition behind the "Saddle-Point Property". Finally, we will discuss the problem of tensor decomposition, which is inherently non-convex, and how it satisfies the "Saddle-Point Property".
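As a toy illustration of the measure-zero phenomenon (not taken from the talk itself): for f(x, y) = x^2 - y^2, the origin is a saddle point where the gradient vanishes. Plain gradient descent started exactly on the stable manifold (the x-axis) converges to the saddle, while any initialization with a nonzero y-component, however tiny, escapes. The function and step size below are illustrative choices.

```python
import numpy as np

def grad_f(p):
    # Gradient of f(x, y) = x^2 - y^2; the origin is a saddle point.
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def gradient_descent(p0, step=0.1, iters=200):
    # Fixed-step gradient descent; f is unbounded below, so an escaping
    # iterate moves away from the saddle rather than converging.
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        p = p - step * grad_f(p)
    return p

# Initialized on the stable manifold (y = 0): trapped at the saddle.
trapped = gradient_descent([1.0, 0.0])
# A perturbation of 1e-6 off the manifold: the y-coordinate grows
# geometrically and the iterate escapes the saddle.
escaped = gradient_descent([1.0, 1e-6])
```

Since the stable manifold here is a line in the plane, a random initialization lands on it with probability zero, matching the Stable Manifold Theorem's guarantee that first-order methods almost surely avoid the saddle.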