Stephen J. Wright

University of Wisconsin-Madison, USA

Optimization in Data Analysis

Date: Wednesday, July 6, 2016 - 10:30-12:00

Venue: Building CW, ground floor, Aula

Stephen J. Wright is the Amar and Balinder Sohi Professor of Computer Sciences at the University of Wisconsin-Madison. His research focuses on computational optimization and its applications to many areas of science and engineering. Prior to joining UW-Madison in 2001, Wright was a Senior Computer Scientist at Argonne National Laboratory (1990-2001) and a Professor of Computer Science at the University of Chicago (2000-2001). He has served as Chair of the Mathematical Optimization Society and as a Trustee of the Society for Industrial and Applied Mathematics (SIAM), and he is a Fellow of SIAM. In 2014, he received the W.R.G. Baker Award from the IEEE.

Wright is the author or coauthor of widely used text and reference books in optimization, including "Primal-Dual Interior-Point Methods" (SIAM, 1997) and "Numerical Optimization" (2nd edition, Springer, 2006, with J. Nocedal). He has published widely on optimization theory, algorithms, software, and applications.

Wright is editor-in-chief of the SIAM Journal on Optimization and has served as editor-in-chief or associate editor of Mathematical Programming (Series A), Mathematical Programming (Series B), SIAM Review, SIAM Journal on Scientific Computing, and several other journals and book series.

Optimization in Data Analysis
Optimization formulations and algorithms are central to modern data analysis and machine learning. Optimization provides a collection of tools and techniques that can be assembled in different ways to solve problems in these areas. In this tutorial, we survey important problem classes in data analysis and identify common structures in their formulations as optimization problems, as well as common requirements for their solution methodologies. We then discuss key optimization algorithms for tackling these problems, including first-order methods and their accelerated variants, stochastic gradient methods, and coordinate descent methods. We also discuss nonconvex formulations of matrix problems, which have become a popular way to improve the tractability of large-scale problems.
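To give a concrete flavor of one of the algorithm classes named above, here is a minimal sketch of stochastic gradient descent applied to a synthetic least-squares problem. The objective, problem sizes, step size, and iteration count are illustrative assumptions for this sketch, not material from the tutorial itself.

```python
# A minimal sketch of stochastic gradient descent (SGD) on a synthetic
# least-squares problem: min_x (1/2n) * ||A x - b||^2.
# All problem data and hyperparameters below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20                      # samples, feature dimension (assumed)
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)  # noisy observations

x = np.zeros(d)
step = 0.01                          # fixed step size (assumed)
for k in range(5000):
    i = rng.integers(n)              # sample one data row uniformly at random
    # Stochastic gradient of the i-th term: (a_i^T x - b_i) * a_i,
    # an unbiased estimate of the full gradient (1/n) A^T (A x - b).
    g = (A[i] @ x - b[i]) * A[i]
    x -= step * g

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

A fixed step size keeps the sketch short; in practice, diminishing step sizes or iterate averaging are commonly used to obtain convergence guarantees.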