The list of tutorial speakers and topics is given below. Each tutorial is accompanied by one or two related talks.

Thursday 9am

  • Haitham Bou Ammar - Machine learning in policy search

Thursday 11am

  • Amir Ali Ahmadi - Optimization over Nonnegative Polynomials: Algorithms and Applications
  • Etienne de Klerk - The Lasserre hierarchy for polynomial optimization

Thursday 1:30pm

  • David Simchi-Levi - Applications of machine learning and optimization in online revenue management

Thursday 4:30pm

  • Ted Ralphs - Multilevel/multistage discrete optimization problems

Friday 9am

  • Javad Lavaei - Nonlinear programming challenges in the optimal power flow problem

Friday 1pm

  • Chaithanya Bandi - Robust Optimization methods in Stochastic Analysis

Friday 4:30pm

  • Elad Hazan - Online Convex Optimization

Saturday 9am

  • Uday Shanbhag - Optimization with equilibrium constraints


  • Title: Machine learning in policy search

  • Speaker: Haitham Bou Ammar

Abstract: Policy search approaches represent a promising method for solving sequential decision-making problems. In the policy search setting, we assume that we are given a class of policies, parameterized by an unknown vector of parameters, mapping states to actions. The goal is to determine the optimal parameters that maximize the total expected reward. A common problem with policy search is that the search through the space of policies can be difficult and computationally expensive. This problem is exacerbated by the complex policy forms (e.g., when forecasts are added to the state space to facilitate learning) typical of real-world applications.
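
In symbols, the setting described above can be written as follows (standard notation, not necessarily the speaker's):

```latex
% A parameterized policy \pi_\theta maps states s_t to actions a_t,
% and policy search seeks
\[
\theta^{\star} \;=\; \arg\max_{\theta}\; J(\theta)
\;=\; \mathbb{E}\Big[\sum_{t=0}^{H} r(s_t, a_t) \;\Big|\; a_t \sim \pi_{\theta}(\cdot \mid s_t)\Big],
\]
% where r is the reward function and H the horizon; the difficulty noted in
% the abstract is that optimizing J over rich policy classes is expensive.
```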

In this tutorial we will target the above framework and shed light on new methods for alleviating some of the problems inherent to policy search. First, a novel method for incorporating forecasts into the state variables is introduced. This method considers complex policy forms represented by neural networks. The neural network's connection weights are then trained through environmental interactions to maximize the total expected return.

Though successful in some settings, it becomes clear that complex policy forms are hard to train due to a variety of problems, including stochasticity, non-stationarity, and learning from scratch. Here we focus on the third problem: instead of learning policies from scratch, as in standard methods, we adopt a novel method based on lifelong machine learning. In lifelong learning, the agent operates in an online multi-task learning setting, where it is required to learn a series of sequential decision-making problems over its lifetime. The agent learns the tasks consecutively, acquiring multiple trajectories within each task before moving to the next. The tasks may be interleaved, giving the agent the opportunity to revisit earlier tasks for further experience, but the agent has no control over the task order. To reduce computational and sample complexities, we present a method which supports transfer by assuming the existence of knowledge repositories shared between tasks, which are learned online from trajectory observations. These repositories are shared across tasks, capturing common structure which is then tailored using task-specific projections to suit the peculiarities of each problem domain.
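
The shared-repository idea in the paragraph above can be sketched with a toy linear factorization, in the spirit of the speaker's lifelong policy search work, though every detail below (names, dimensions, the alternating-least-squares fit) is invented for illustration: each task's policy parameters are modeled as a shared dictionary times a task-specific code.

```python
import numpy as np

# Toy sketch of the shared-repository idea: each task's policy parameters
# theta_t are modeled as L @ s_t, where the dictionary L is shared across
# tasks and s_t is a task-specific projection. The ridge-regularized
# alternating least squares below is illustrative, not the talk's algorithm.

rng = np.random.default_rng(0)
d, k, n_tasks = 8, 2, 20            # param dim, shared-basis size, #tasks

L_true = rng.standard_normal((d, k))
thetas = [L_true @ rng.standard_normal(k) for _ in range(n_tasks)]

L = rng.standard_normal((d, k))     # shared knowledge repository (learned)
lam = 1e-3                          # ridge regularizer

for _ in range(50):                 # alternate between codes and dictionary
    S = np.stack([np.linalg.solve(L.T @ L + lam * np.eye(k), L.T @ th)
                  for th in thetas], axis=1)          # task-specific codes
    Theta = np.stack(thetas, axis=1)                  # d x n_tasks
    L = Theta @ S.T @ np.linalg.inv(S @ S.T + lam * np.eye(k))

recon_err = np.mean([np.linalg.norm(L @ s - th)
                     for s, th in zip(S.T, thetas)])
print(f"mean reconstruction error: {recon_err:.4f}")
```

Because the toy task parameters genuinely lie in a low-dimensional shared subspace, the learned repository reconstructs them almost exactly, which is the point of the transfer assumption.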

In a generalization of this previous method, we finally show that such transferred knowledge can arrive from highly dissimilar sources. Namely, we present a framework for lifelong transfer in policy search where knowledge arrives from tasks belonging to different domains (i.e., exhibiting varying state and action representations).

We present successful results of these methods in a set of experiments varying from simulated battery storage problems to real-world robotics including an application of controlling a quadcopter unmanned aerial vehicle.

Bio: Haitham Bou Ammar is a post-doctoral researcher in the Operational Research and Financial Engineering (ORFE) Department at Princeton University. Prior to joining Princeton, he was a post-doctoral researcher in the Department of Computer and Information Science at the University of Pennsylvania, and a member of the GRASP (General Robotics Automation, Sensing, Perception) lab. His primary research interests lie in the fields of machine learning, artificial intelligence, data mining, optimization, and control theory with applications to robotics and medicine. In particular, his research focuses on developing versatile systems that can learn multiple tasks over a lifetime of experience in complex environments, and transfer knowledge to rapidly acquire new capabilities.

Haitham received his Ph.D. in Artificial Intelligence from Maastricht University in the Netherlands, with a thesis focusing on transfer for reinforcement learning tasks. His dissertation developed methods for autonomous transfer between control systems for efficient and effective learning.

  • Title: Optimization over Nonnegative Polynomials: Algorithms and Applications

  • Speaker: Amir Ali Ahmadi, Department of Operations Research and Financial Engineering, Princeton University

Abstract: This tutorial is about a problem that is very easy to describe: how can one impose conditions on the coefficients of a multivariate polynomial so that it becomes nonnegative on a set defined by polynomial equations and inequalities? This problem has surprisingly many applications in applied and computational mathematics, including polynomial optimization, combinatorial optimization, Lyapunov analysis of dynamical systems, regression and classification in machine learning, formal verification of safety-critical systems, and automated theorem proving in geometry, among many others. I will explain how nonnegative polynomials arise in all these applications and then focus on the computational complexity of the problem and some of the algorithms we currently have for approaching it. In particular, I will cover the theory of sum of squares polynomials and its relation to (i) classical Positivstellensatz results in algebraic geometry, and (ii) modern computational tools involving semidefinite optimization (as shown by Parrilo and others). This tutorial will be followed by a tutorial by Etienne de Klerk, who will speak about the Lasserre hierarchy and its connection to moment problems. This will be "dual" (in the convex analysis sense) to my presentation.
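
A tiny worked instance of the sum of squares certificate mentioned above (my example, not one from the tutorial):

```latex
% For z = (x, y)^T, write p as a quadratic form p(x,y) = z^T Q z with
%   Q = [ 1  -1 ]
%       [ -1  2 ]   (positive semidefinite: trace 3, det 1),
% and factoring Q yields the explicit certificate
\[
p(x,y) \;=\; x^2 - 2xy + 2y^2 \;=\; (x - y)^2 + y^2 \;\ge\; 0 .
\]
% Searching for such a Gram matrix Q \succeq 0, subject to affine constraints
% matching the coefficients of p, is exactly a semidefinite program.
```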

Time permitting, I will also touch upon my recent work with Anirudha Majumdar to make the algorithms in this field more scalable by moving from semidefinite programming to linear or second order cone programming. There will be a follow-up session with talks by Anirudha Majumdar and Georgina Hall who will speak further about this direction of research.

The theory of nonnegative polynomials has recently found applications also in robotics, and this will be touched upon in the plenary talk by Russ Tedrake.

Bio: Amir Ali Ahmadi ( http://aaa.princeton.edu/ ) is an Assistant Professor at the Department of Operations Research and Financial Engineering at Princeton University and an Associated Faculty member of the Department of Computer Science. Amir Ali received his PhD in EECS from MIT and was a Goldstine Fellow at the IBM Watson Research Center prior to joining Princeton. His research interests are in optimization theory and in computational aspects of dynamics and control. Amir Ali's recent awards include the INFORMS Computing Society Prize, the NSF Career Award, the AFOSR Young Investigator Program Award, the Goldstine Fellowship of IBM Research, the teaching award of Princeton's Engineering Council, the best conference paper award of the IEEE International Conference on Robotics and Automation, a best paper award in SIAM Journal on Control and Optimization (2013-2015), the NSF Oberwolfach Fellowship, and a Google Faculty Research Award.

  • Title: The Lasserre hierarchy for polynomial optimization

  • Speaker: Etienne de Klerk, Tilburg University

Abstract: We consider global optimization problems involving polynomials only, and review the recent methodology, due to Lasserre, for computing the optimum through a hierarchy of semidefinite programming problems. Topics covered will include: software, known error bounds (also for some combinatorial optimization problems), and applications.
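
Schematically, the hierarchy replaces a nonnegativity constraint with a sum-of-squares certificate of growing degree (standard notation, sketched here for orientation):

```latex
% For K = {x : g_1(x) >= 0, ..., g_m(x) >= 0}, the global minimum satisfies
\[
p^{\min} \;=\; \max\{\lambda : p(x) - \lambda \ge 0 \ \text{on } K\}
\;\ge\;
\max\Big\{\lambda : p - \lambda = \sigma_0 + \sum_{i=1}^{m} \sigma_i g_i\Big\},
\]
% where each \sigma_i is a sum of squares of degree at most 2r. Each level r
% of the hierarchy is a semidefinite program, and the lower bounds converge
% to p^min as r grows (under an Archimedean condition, by Putinar's
% Positivstellensatz).
```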

Bio: Etienne de Klerk obtained his PhD degree from the Delft University of Technology in The Netherlands in 1997. From January 1998 to September 2003, he held assistant professorships at the Delft University of Technology, and from September 2003 to September 2005 an associate professorship at the University of Waterloo, Canada. Since September 1st 2004 he has worked at Tilburg University, The Netherlands, first as an associate professor, and then as full professor (since June 2009). From August 29th, 2012 to August 31st, 2014, he also worked as full professor in the Division of Mathematics of the School of Physical and Mathematical Sciences at the Nanyang Technological University in Singapore, while on leave from Tilburg University. As of September 2015, he also holds a part-time (0.2fte) position as full professor at the Delft University of Technology in The Netherlands. Dr. De Klerk's main research interest is mathematical optimization, and his publications include the monograph "Aspects of Semidefinite Programming" (Kluwer Academic Publishers, 2002). He is currently an associate editor of the SIAM Journal on Optimization and has served two terms as associate editor at the INFORMS Journal Operations Research. He is a co-recipient of the Canadian Foundation for Innovation's New Opportunities Fund award, and a recipient of the VIDI award of the Dutch Organization for Scientific Research.

  • Title: Applications of Machine Learning and Optimization in Online Revenue Management

  • Speaker: David Simchi-Levi, Professor of Engineering Systems, MIT

Abstract: In a dynamic pricing problem where the demand function is unknown a priori, price experimentation can be used for demand learning. In practice, however, online sellers are faced with a few business constraints, including the inability to conduct extensive experimentation, limited inventory, and high demand uncertainty. In this talk we discuss models and algorithms that combine machine learning and price optimization to significantly improve revenue. Specifically, we start by considering a dynamic pricing model where the demand function is unknown but belongs to a known finite set. The seller is allowed to make a limited number of price changes during a finite time horizon. The objective is to minimize the regret, i.e., the expected total revenue loss compared to a clairvoyant who knows the demand distribution in advance. We demonstrate a pricing policy that incurs the smallest possible regret, up to a constant factor. In the second part of the presentation we extend the model to a network revenue management problem where an online retailer aims to maximize revenue from multiple products with limited inventory. As is common in practice, the retailer does not know the expected demand at each price and must learn the demand information from sales data. We propose an efficient and effective dynamic pricing algorithm, which builds upon the Thompson sampling algorithm used for multi-armed bandit problems by incorporating inventory constraints into the pricing decisions. The algorithm has both strong theoretical performance guarantees and promising numerical performance when compared to other algorithms developed for the same setting. Throughout the presentation, we report results from live implementations at companies such as Rue La La, Groupon, and a large European airline carrier.
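
The first setting in the abstract (demand function known to belong to a finite set) lends itself to a compact illustration. The sketch below is not the speakers' algorithm; it is a minimal Thompson-sampling loop over two hypothetical demand curves, without the price-change limits or inventory constraints the talk adds:

```python
import random

# Thompson sampling over a known finite set of demand models. Each model
# maps price -> purchase probability; we keep a posterior over models,
# sample one model per period, charge its revenue-optimal price, observe
# the sale outcome, and update the posterior by Bayes' rule.

random.seed(0)
prices = [10.0, 15.0, 20.0]
models = [  # hypothetical demand curves: purchase probability at each price
    {10.0: 0.8, 15.0: 0.5, 20.0: 0.2},
    {10.0: 0.9, 15.0: 0.7, 20.0: 0.5},
]
true_model = models[1]       # unknown to the seller
posterior = [0.5, 0.5]       # uniform prior over the two models

revenue = 0.0
for t in range(2000):
    m = random.choices(models, weights=posterior)[0]   # sample a model
    p = max(prices, key=lambda q: q * m[q])            # its best price
    sale = random.random() < true_model[p]             # observe demand
    revenue += p if sale else 0.0
    # Bayes update: likelihood of the observed outcome under each model
    like = [mm[p] if sale else 1.0 - mm[p] for mm in models]
    z = sum(w * l for w, l in zip(posterior, like))
    posterior = [w * l / z for w, l in zip(posterior, like)]

print(f"posterior weight on the true model: {posterior[1]:.3f}")
```

The posterior concentrates on the true demand model, after which the policy keeps charging that model's optimal price; the inventory-constrained version in the talk modifies the price-selection step, not this basic sample-then-update structure.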

Bio: David Simchi-Levi is a Professor of Engineering Systems at MIT and Chairman of OPS Rules, an operations analytics consulting company, and Opalytics, a cloud analytics platform. He is considered one of the premier thought leaders in supply chain management and business analytics. His research focuses on developing and implementing robust and efficient techniques for operations management. He has published widely in professional journals on both practical and theoretical aspects of supply chain and revenue management. His Ph.D. students have accepted faculty positions at leading academic institutions including U. of California Berkeley, Columbia U., Cornell U., Duke U., Georgia Tech, Harvard U., U. of Illinois Urbana-Champaign, U. of Michigan, Purdue U. and Virginia Tech. Professor Simchi-Levi co-authored the books Managing the Supply Chain (McGraw-Hill, 2004), the award-winning Designing and Managing the Supply Chain (McGraw-Hill, 2007), and The Logic of Logistics (3rd edition, Springer, 2013). He also published Operations Rules: Delivering Customer Value through Flexible Operations (MIT Press, 2011). He served as Editor-in-Chief of Operations Research (2006-2012), the flagship journal of INFORMS, and of Naval Research Logistics (2003-2005). He is an INFORMS Fellow, an MSOM Distinguished Fellow, and the recipient of the 2014 INFORMS Daniel H. Wagner Prize for Excellence in Operations Research Practice, the 2014 INFORMS Revenue Management and Pricing Section Practice Award, the 2009 INFORMS Revenue Management and Pricing Section Prize, and the Ford 2015 Engineering Excellence Award. Professor Simchi-Levi has consulted and collaborated extensively with private and public organizations. He was the founder of LogicTools, which provided software solutions and professional services for supply chain optimization. LogicTools became part of IBM in 2009.

  • Title: Multilevel/multistage discrete optimization problems

  • Speaker: Ted Ralphs

Abstract: In a traditional optimization modeling framework, the class of optimization problems one can consider is restricted to those in which a fixed decision must be made by a single decision-maker (DM) at a fixed point in time with a single objective. Such models assume implicitly that the DM has deterministic knowledge of all inputs to the model and that there is no possibility for "recourse" to counterbalance the possible adverse effects of unpredicted future events on the effectiveness of the determined strategy. There are a wide variety of generalizations of this basic framework that account for additional features present in many real-world optimization problems. Multilevel/multistage optimization is a framework that allows both for multiple (possibly competing) DMs and for decisions made at multiple points in time. The framework subsumes both game theoretic models, in which multiple DMs with competing objectives make decisions sequentially, and recourse models, in which a single DM must make a sequence of decisions over time in order to react to changing conditions. The uncertainty resulting from a competitive decision-making environment and the uncertainty arising from exogenous sources of stochasticity can be handled in a unified way, both from the perspective of modeling and from the perspective of algorithm development.

In the first part of this tutorial session, we present a general framework for modeling and solution of this broad class of problems. The overview will consist primarily of a high-level survey of solution methodology, but the focus will not be on details of particular algorithms. Rather, we focus on development of a conceptual framework within which one can understand the theoretical basis for existing solution approaches in terms of the ingredients necessary for development of convergent algorithms.

In the second part of the session, we'll introduce a software framework implementing the algorithmic components introduced in the first part. In particular, we'll describe details of two software packages: SYMPHONY and MibS. MibS is an overarching framework for solution of bilevel integer linear programs. SYMPHONY is an MILP solver that has been extended to include some unique capabilities that allow it to be used effectively as a sub-solver within MibS, including warm-starting, exporting of dual functions, and solution of continuous bilevel integer linear programs. Computational results will be presented.
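
A schematic of the bilevel form such software targets (generic notation, not the packages' exact input format):

```latex
% The upper-level DM chooses x anticipating the lower-level DM's
% optimal reaction y:
\[
\min_{x \in X}\; c^{\top} x + d^{\top} y
\quad \text{s.t.} \quad
y \in \arg\min_{y' \in Y(x)}\; f^{\top} y' ,
\]
% with integrality restrictions on (some components of) x and y. Recourse
% models arise as the special case in which the two "players" share one
% objective, which is how the game-theoretic and multistage views unify.
```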

Bio: Ted Ralphs received his Ph.D. in Operations Research from Cornell University in 1995. He is currently a professor in the Department of Industrial and Systems Engineering (ISE) at Lehigh University, where he is director of the Laboratory for Computational Optimization Research at Lehigh (COR@L) and chair of Lehigh's Research Computing Steering Committee. He is a co-founder of the COIN-OR Foundation, a non-profit foundation promoting the development of open source software for operations research and is currently chair of the Technical Leadership Council and a member of the Strategic Leadership Board, as well as project manager of a number of projects hosted in the COIN-OR open source software repository. His research interests include development of methodology for solving discrete optimization problems, including those with multiple levels or multiple objectives; development of parallel search algorithms; development of open source software; and applications of discrete optimization.

  • Title: Near-Global Solutions of Nonlinear Mixed-Integer Power Optimization Problems: Theory, Numerical Algorithm, and Case Studies

  • Speaker: Javad Lavaei

Abstract: The planning and operation of power networks depend heavily on various large-scale optimization problems, solved on time scales ranging from every few minutes to every several months. Some of these power optimization problems are: state estimation, optimal power flow (OPF), security-constrained OPF, unit commitment, and network reconfiguration. It is very challenging to solve these problems efficiently due to the nonlinearities imposed by two different sources: (i) discrete variables, such as the ratio of a tap-changing transformer, the on/off status of a line switch, or the commitment parameter of a generator, and (ii) the laws of physics. Since 1962, the nonlinearity of OPF-based problems has been studied extensively, and various heuristic and local-search algorithms have been proposed. Recently, there has been a growing interest in using semidefinite programming (SDP) relaxations for finding global or near-global solutions of large-scale non-convex power problems. In this talk, we present several results on this topic.

The existence of a rank-1 matrix solution to the SDP relaxation of a given nonlinear mixed-integer power optimization problem enables the recovery of a global solution of the original problem. We prove that the relaxation is exact for a large class of problems due to the physics of power grids. In the case where the relaxation is inexact, we show that since real-world power networks have low treewidth, the SDP relaxation always has a low-rank solution. By leveraging this property, we design a penalized SDP relaxation to reduce the rank to 1, leading to a near-global solution. As a case study, we demonstrate that the proposed relaxation is able to find feasible solutions for the optimal power flow problem for various IEEE and Polish test systems with a global optimality guarantee of at least 99%. Moreover, we propose a computationally-cheap distributed algorithm to solve the penalized SDP relaxation at scale. Unlike the existing second-order methods that cannot handle large SDP problems, this approach leverages the low treewidth of power grids and is able to solve problems with over 30 million parameters in less than 30 minutes on a modest computing platform. As an example, we demonstrate that solving the SDP relaxation of OPF over the NY Grid using the proposed algorithm requires only a series of eigenvalue decompositions over small matrices of size at most 40, which runs very fast.
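
The rank-1 mechanism described above can be stated schematically (generic notation, omitting the OPF-specific structure of the matrices):

```latex
% Quadratic power-flow constraints in the voltage vector x can be written
% linearly in a matrix variable W standing in for x x^*:
\[
\min_{W \succeq 0}\ \operatorname{tr}(C W)
\quad \text{s.t.} \quad
\operatorname{tr}(M_k W) \le b_k, \quad \operatorname{rank}(W) = 1 .
\]
% Dropping the rank constraint yields the SDP relaxation; if the relaxation
% happens to return a rank-1 solution W = x x^*, then x is a globally
% optimal solution of the original non-convex problem.
```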

  • Title: Stochastic Analysis and Stochastic Optimization via Robust Optimization

  • Speaker: Chaithanya Bandi

Abstract: To understand and optimize the performance of a system under uncertainty, the traditional approach consists of positing a probability distribution over the uncertain parameters, deriving information on the system performance, and optimizing it. When the system's dynamics are complex, computing the relevant performance measures analytically becomes challenging, and simulation becomes the only resort. However, simulation models may take a considerable amount of time for the results to become statistically significant, and their complexity can make it difficult to extract and further optimize key qualitative insights. Motivated by these challenges, we propose a robust optimization approach to analyzing and optimizing the expected performance of stochastic systems characterized by linear dynamics. Our approach consists of the following steps: (1) We model uncertainty via polyhedral sets inspired by the limit laws of probability. The size of these sets is characterized by a few variability parameters, which control the degree of conservatism of the model. (2) We then cast the performance analysis problem as an optimization problem, for instance maximizing a given performance measure subject to the system's dynamics and the parametric uncertainty sets. The output of this optimization can be thought of as a worst-case performance measure expressed as a function of the variability parameters. (3) Instead of computing the expected performance of the system by averaging simulation outputs, we assume the variability parameters follow limiting distributions (derivatives of the normal distribution for light-tailed systems and the stable distribution for heavy-tailed systems) and average the worst-case outputs from the optimization over a discretized space of the variability parameters.
(4) To optimize the stochastic system, we cast the problem of finding the optimal design input as a robust optimization problem, e.g., minimizing the average or conditional value-at-risk of the worst-case performance outputs. This framework allows a significant reduction in the dimension of uncertainty, which provides grounds for tractable analyses. We illustrate the tractability and accuracy of our approach by (a) simulating the transient behavior of multi-server feedforward queueing networks, and (b) determining optimal base-stock policies of generalized supply chain networks. This is joint work with Dimitris Bertsimas and Nataly Youssef.
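
Step (1) can be made concrete with a standard CLT-inspired uncertainty set (generic notation; the authors' sets may differ in detail):

```latex
% For i.i.d. primitives T_1, ..., T_n (e.g., interarrival or service times)
% with mean \mu and standard deviation \sigma, a polyhedral set suggested
% by the central limit theorem is
\[
\mathcal{U}_\Gamma \;=\; \Big\{ (T_1, \dots, T_n) :
\Big| \sum_{i=1}^{n} T_i - n\mu \Big| \;\le\; \Gamma\, \sigma \sqrt{n} \Big\},
\]
% where the single variability parameter \Gamma controls conservatism.
% Step (3) then averages worst-case outputs over a distribution on \Gamma
% rather than simulating the primitives directly.
```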

Bio: Chaithanya Bandi joined the Department of Managerial Economics and Decision Sciences at the Kellogg School of Management in 2013. He received a Ph.D. in Operations Research from MIT and a Bachelor's degree in Computer Science from IIT Madras. He has worked at technology and financial services companies including Google, Yahoo, Bell Labs, Lehman Brothers, and Investment Technology Group. Professor Bandi is broadly interested in problems of decision making under uncertainty and incomplete information, with applications to operations management. In particular, he has focused on developing robust-optimization-based models for key problems in applications such as queueing control, risk optimization in portfolios, mechanism design, and online algorithms.

  • Title: Online Convex Optimization

  • Speaker: Elad Hazan

Abstract: We describe the recent framework of online convex optimization, which naturally merges optimization and regret minimization in games. We survey the basic algorithms and tools at the heart of this framework, which have led to the resolution of fundamental questions in online learning.
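
As a concrete instance of the framework, here is online projected gradient descent on a one-dimensional problem with adversarially alternating linear losses; the O(√T) regret it exhibits is the flavor of guarantee such tutorials survey. The setup is invented for illustration:

```python
import math

# Online (projected) gradient descent on the interval [-1, 1] against the
# loss sequence f_t(x) = a_t * x with alternating signs. With step sizes
# eta_t ~ 1/sqrt(t), cumulative loss exceeds the best fixed decision in
# hindsight by only O(sqrt(T)).

T = 10_000
a = [1.0 if t % 2 == 0 else -1.0 for t in range(T)]   # adversarial gradients

x, total = 0.0, 0.0
for t in range(T):
    total += a[t] * x                                  # suffer loss f_t(x_t)
    x -= a[t] / math.sqrt(t + 1)                       # gradient step
    x = max(-1.0, min(1.0, x))                         # project onto [-1, 1]

# Losses are linear, so the best fixed point lies at an endpoint.
best_fixed = min(sum(a) * u for u in (-1.0, 1.0))
regret = total - best_fixed
print(f"regret after {T} rounds: {regret:.1f} (sqrt(T) = {math.sqrt(T):.1f})")
```

Against this alternating sequence no single fixed point does well or badly (the losses cancel), yet the learner's regret still stays on the order of √T rather than growing linearly, which is the defining guarantee of the framework.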

  • Title: Equilibrium Problems in Power Markets: Models, Analysis and Computation

  • Speaker: Uday Shanbhag

Abstract: Given the sheer breadth of the field of power markets, there have been several past efforts at reviewing the literature. Yet, many gaps persist in both the reviews as well as in the related literature. Primary amongst these is the lack of a comprehensive treatment of equilibrium models in power markets, characterized by the absence of the following: (i) the ability to capture a range of competitive interactions (perfectly competitive, Nash-Cournot, and its variants) via a unified framework; (ii) the capacity to cope with uncertain parameters and risk-based formulations; and (iii) the ability to incorporate hierarchy, a consequence of the multitude of settlements that define most power markets. Motivated by these lacunae, in this tutorial, via complementarity theory, equilibrium programming, and stochastic optimization, we intend to provide the analytical and algorithmic foundations for contending with the challenges arising in (i), (ii), and (iii).
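
As a hint of how complementarity theory enters, market equilibrium models of this kind are typically posed by concatenating every participant's optimality conditions into a single complementarity problem (standard NCP notation, sketched here for orientation):

```latex
% Nonlinear complementarity problem (NCP) format: find x such that
\[
0 \;\le\; x \;\perp\; F(x) \;\ge\; 0 ,
\]
% i.e., x >= 0, F(x) >= 0, and x^T F(x) = 0, where F stacks the KKT
% conditions of all market participants and the market-clearing conditions.
% Perfectly competitive and Nash-Cournot variants differ only in the map F,
% which is one way a unified framework of the kind in (i) can be obtained.
```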

Bio: Since fall 2012, Uday V. Shanbhag has been an associate professor in the Department of Industrial and Manufacturing Engineering (IME) at the Pennsylvania State University. Prior to arriving at Penn State, from 2006–2012, he was first an assistant professor, and subsequently an associate professor, in the Department of Industrial and Enterprise Systems Engineering (ISE) at the University of Illinois at Urbana-Champaign. His research interests lie in the development of analytical tools and scalable computational schemes for optimization and equilibrium problems, with a focus on addressing uncertainty, nonsmoothness, and nonconvexity. His research honors include the triennial A.W. Tucker Prize of the Mathematical Programming Society (MPS) in 2006, the NCSA Faculty Fellowship in 2006, the Computational Optimization and Applications (COAP) best paper award (with advisor Walter Murray) in 2007, and the best theoretical paper award at the Winter Simulation Conference (WSC) in 2013 (with Angelia Nedić and Farzad Yousefian). Additionally, he was a finalist for the Microsoft Faculty Fellowship (2008) and was awarded the National Science Foundation (NSF) Faculty Early Career Development (CAREER) award in 2012 from the Operations Research program. Several of his doctoral students have been recognized by a range of honors, including Uma Ravat (best student paper prize at the triennial International Conference on Stochastic Programming (2010)), Aswin Kannan (finalist for the best student paper prize at the triennial International Conference on Stochastic Programming (2010)), and Huibing Yin (finalist for the best student paper prize at the IEEE Conference on Decision and Control (2009)).
Finally, his teaching has been recognized through annual teaching awards by ISE in 2008 and 2011. His service-related activities include being an associate editor for the Allerton Conference on Computing, Communication, and Control, a program committee member of the Conference on High Confidence Networked Systems (HiCoNS) (2012, 2013), Vice-chair (Nonlinear Programming) of the INFORMS Optimization Society (2009, 2010), and the cluster chair for Nonlinear Programming at INFORMS (2009, 2010). Additionally, he was on the advisory committees for the INFORMS Optimization conference in Gainesville, FL (2010) and the Midwest INFORMS Conference (2011), and on the Nicholson paper prize committee in 2013. He was a co-chair of the Nicholson paper prize committee (with Kavita Ramanan) in 2014. Uday V. Shanbhag has a Ph.D. from Stanford University's Department of Management Science and Engineering (2006), with a concentration in operations research, and was associated with the Systems Optimization Laboratory while at Stanford. He also holds master's and undergraduate degrees from the Massachusetts Institute of Technology (MIT), Cambridge (in Operations Research) and the Indian Institute of Technology (IIT), Bombay, respectively.