Plenary Speakers

We are pleased to have four exciting plenary speakers spanning major areas of application (energy and robotics), as well as major methodological themes (learning in stochastic optimization, and optimization in machine learning):

  • Richard O'Neill - Federal Energy Regulatory Commission (FERC) - Optimization challenges in electric power markets
  • Russ Tedrake - Massachusetts Institute of Technology - Optimization in the emerging world of robotics
  • Peter Frazier - Cornell University - Stochastic Optimization in the Tech Sector: Yelp and Uber
  • Han Liu - Princeton University - Optimization challenges in statistics and machine learning

Abstracts:

  • Russ Tedrake - Exploiting structure for robust model-based optimization in robotics

I find that the optimization community often assumes that the algorithms powering the robots seen on YouTube are already using all of the best optimization tools available. In almost every case, they are not (yet!). For many of the most impressive robots, simple, intuitive, hand-tuned control systems still rule the day, despite requiring massive amounts of manpower and testing to get right. Optimization-based controllers have yet to match their simplicity and robustness (e.g., to sensing and model errors) in the real world.

This trend is starting to change. For example, the last few years have seen optimization methods emerge as a popular tool for whole-body planning and control on humanoid robots. Quadratic-programming approaches to whole-body control generalize traditional Jacobian-based approaches by supporting hard inequality constraints due to control and friction limits. Online center-of-mass / zero-moment-point planning algorithms can be similarly generalized. Thanks to fast computers and advanced solvers, these optimizations can now be solved inside a high-rate feedback loop. The change is as much cultural as technological, with more of the field looking to optimization for more general solutions.
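The flavor of these QP-based controllers can be sketched in miniature: choose joint accelerations that best achieve a desired task-space acceleration through a task Jacobian, subject to hard limits. The numbers, dimensions, and box bounds below are all made up for illustration (a real whole-body controller would include full dynamics, contact forces, and friction-cone constraints), and a generic bounded solver stands in for a dedicated QP solver:

```python
import numpy as np
from scipy.optimize import minimize

# Toy whole-body-control-style QP for a hypothetical 3-DoF mechanism:
# pick joint accelerations qdd minimizing ||J qdd - xdd_des||^2,
# with simple box bounds standing in for control limits.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])       # task Jacobian (illustrative values)
xdd_des = np.array([0.3, -0.1])       # desired task-space acceleration

def cost(qdd):
    # Quadratic tracking objective in the task space.
    return float(np.sum((J @ qdd - xdd_des) ** 2))

bounds = [(-1.0, 1.0)] * 3            # stand-in for torque/control limits

res = minimize(cost, np.zeros(3), bounds=bounds)
qdd_opt = res.x                        # optimal joint accelerations
```

A dedicated QP solver exploits the problem's quadratic structure to run fast enough for the high-rate feedback loops mentioned above; the generic solver here is only for readability.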

We need your help. In some cases, we are still not posing the right questions. In others, we need more work exploiting problem structure to make the formulations we want tractable. This structure includes the famous structure of the floating-base Lagrangian, sparsity in the (inverse) dynamics, the algebraic structure of the kinematics, dynamics, and constraints, and the inherent combinatorial structure imposed by interacting with the environment through making and breaking contact. I will describe my group's initial work toward exploiting this structure in our algorithms, and our detailed approach to applying these tools to humanoid robots and fast-flying UAVs.

Bio is here: http://groups.csail.mit.edu/locomotion/russt.html

  • Peter Frazier - Stochastic Optimization in the Tech Sector: Yelp and Uber

We discuss two stochastic optimization problems from the speaker's work in the tech sector. First, we discuss Bayesian optimization of expensive functions at Yelp, applied to parameter choice in content recommendation algorithms and to the design of websites and mobile apps. This work uses machine learning to model the objective function and treats the choice of which points to sample next as a stochastic optimization problem in its own right. Second, we discuss dynamic pricing of transportation ("surge pricing") at Uber, focusing on the rationale for dynamic pricing viewed as a problem in stochastic optimization.
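The core loop of Bayesian optimization can be illustrated in a few lines: fit a surrogate model (here a bare-bones Gaussian process) to a handful of expensive evaluations, then choose the next sample point by trading off predicted value against uncertainty. Everything below is a generic textbook sketch, not Yelp's actual system; the objective function, kernel length-scale, and upper-confidence-bound rule are illustrative choices:

```python
import numpy as np

def expensive_f(x):
    # Hypothetical expensive objective (e.g., a metric vs. one parameter).
    return -(x - 0.6) ** 2

def gp_posterior(X, y, Xq, ls=0.2, noise=1e-6):
    # Gaussian-process posterior mean and std dev with a squared-exponential
    # kernel; ls is the length-scale, noise a small jitter for stability.
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(Xq, X), k(Xq, Xq)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = np.diag(Kss - Ks @ Kinv @ Ks.T)
    return mu, np.sqrt(np.maximum(var, 0.0))

X = np.array([0.1, 0.5, 0.9])          # points already evaluated
y = expensive_f(X)                     # their (expensive) observed values
Xq = np.linspace(0.0, 1.0, 101)        # candidate next sample points
mu, sigma = gp_posterior(X, y, Xq)
ucb = mu + 2.0 * sigma                 # upper confidence bound acquisition
x_next = Xq[np.argmax(ucb)]            # next point to evaluate
```

The acquisition-maximization step is itself the "choose where to sample next" stochastic optimization problem the abstract refers to.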

  • Han Liu - Princeton University - Optimization challenges in statistics and machine learning

Optimization lies at the heart of machine learning. In this talk, I will use the graphical model learning problem to illustrate several representative optimization challenges in statistics and machine learning. These challenges range from high dimensionality and nonconvexity to massive sample sizes. In addition to presenting these challenges, I will explain their origins and show that they arise from very natural formulations of the learning problems. I will also introduce several new research directions at the intersection of optimization and machine learning.