Full Name
Andrea Montanari
Job Title
Professor, Department of Statistics and Department of Mathematics
Company
Stanford University
Speaker Bio
Andrea Montanari is the John D. and Sigrid Banks Professor in Statistics and Mathematics at Stanford University. He received a Laurea degree in Physics in 1997 and a Ph.D. in Physics in 2001, both from Scuola Normale Superiore in Pisa, Italy. He was a post-doctoral fellow at the Laboratoire de Physique Théorique de l'Ecole Normale Supérieure (LPTENS), Paris, France, and at the Mathematical Sciences Research Institute, Berkeley, USA. From 2002 to 2010 he was Chargé de Recherche (with the Centre National de la Recherche Scientifique, CNRS) at LPTENS. He joined Stanford in 2006, and from 2006 to 2023 was a faculty member in the Departments of Electrical Engineering and Statistics. From 2021 to 2023, he was the Robert and Barbara Kleist Professor in the School of Engineering.

He was awarded the CNRS bronze medal for theoretical physics in 2006, the National Science Foundation CAREER award in 2008, the Okawa Foundation Research Grant in 2013, the James L. Massey Award of the Information Theory Society in 2016, and the Le Cam Prize of the French Statistical Society in 2020. He received the ACM SIGMETRICS best paper award in 2008 and the Applied Probability Society Best Publication Award in 2015. He was elevated to IEEE Fellow in 2017 and IMS Fellow in 2020. He was an invited sectional speaker at the 2020 International Congress of Mathematicians and an IMS Medallion lecturer for the 2020 Bernoulli-IMS World Congress.
Abstract
I will discuss the problem of solving a system of equations F(x)=0, where x is a d-dimensional unit vector and F is a non-linear map from R^d to R^n whose components are independent, rotationally invariant Gaussian processes. We study this problem in the proportional asymptotics in which n and d diverge, with their ratio converging to alpha>0. I will present upper and lower bounds, as well as conjectures, about the existence of solutions and the existence of polynomial-time algorithms to find them. Finally, I will discuss generalizations of this model, and how these insights shed light on the optimization landscape of overparametrized neural nets. Based on joint work with Eliran Subag.
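The setup described above can be sketched formally as follows. This is only an illustrative reconstruction from the abstract: the covariance function xi and the Kronecker-delta independence across components are standard conventions for rotationally invariant Gaussian processes, not details stated in the abstract itself.

```latex
% Find x on the unit sphere solving F(x) = 0, where
% F = (F_1, ..., F_n) : R^d -> R^n has i.i.d. Gaussian-process components.
% Rotational invariance means the covariance depends on x_1, x_2 only
% through their inner product (via some function xi, assumed here).
\begin{align}
  F(x) &= 0, \qquad x \in S^{d-1} := \{x \in \mathbb{R}^d : \|x\|_2 = 1\},\\
  \mathbb{E}\big[F_i(x_1)\,F_j(x_2)\big] &= \delta_{ij}\,\xi\big(\langle x_1, x_2\rangle\big),\\
  n, d &\to \infty, \qquad n/d \to \alpha > 0.
\end{align}
```

The last line is the proportional asymptotic regime mentioned in the abstract: both the number of equations n and the dimension d diverge, with a fixed aspect ratio alpha.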