We are all aware of the need to handle higher levels of uncertainty when scheduling generation on the power grid (the unit commitment problem). This is becoming widely known as the "stochastic unit commitment problem," but sadly, it has also been equated with a particular type of policy for handling uncertainty known as multistage stochastic programming using scenario trees.
For a tutorial on stochastic optimization, using energy (and the stochastic unit commitment problem) as the context, see the link below.
I am going to make the following points (each is developed in more detail lower on the page):
Tutorial on stochastic optimization and stochastic unit commitment
Models, lookahead models, base models and policies
Stochastic programming and scenario trees
Robust cost function approximations
Illustration using offshore wind study
Please feel free to email comments/questions to Warren Powell <powell@princeton.edu>. I am happy to post informative comments (including dissents) at the bottom of this page.
Tutorial on stochastic optimization and stochastic unit commitment
Tutorial: Stochastic Optimization in Energy - Tutorial given to the Federal Energy Regulatory Commission, August 6, 2014 (PowerPoint format, 20 MB; PDF format, 8 MB)
This is a lengthy (200-slide) tutorial that is designed to be read (different from my usual style). It uses the contextual domain of energy systems (especially stochastic unit commitment) to provide a broad introduction to stochastic optimization. Topics include:
When I attend energy workshops featuring research on unit commitment and uncertainty, I walk away feeling as if there is only one way to handle uncertainty, which is to formulate "stochastic programs" with "scenario trees". In a nutshell, instead of solving one optimization model over some horizon with a single point forecast of the future, scenario trees require solving a dramatically larger optimization model with different samples of what might happen in the future (represented using "scenarios").
This is simply not true. What the stochastic programming community is promoting is a particular policy known as a "lookahead policy" using a stochastic "lookahead model." However, this is only one of four fundamental classes of policies that can be used to solve sequential decision problems. The four classes are:
Policy function approximations (PFAs) - analytic functions that map a state directly to a decision.
Cost function approximations (CFAs) - parametrically modified deterministic models (for example, a unit commitment model with added reserve constraints).
Policies based on value function approximations (VFAs) - decisions made by optimizing the immediate cost plus an approximation of the value of the downstream state.
Lookahead policies - decisions made by optimizing over an approximate model of the future, which may be deterministic (a point forecast) or stochastic (a scenario tree).
A common misconception is that the ISOs (and their vendors such as Alstom) are solving "deterministic" models, with the implication that these will not work in the presence of high levels of uncertainty. But no ISO uses a pure deterministic model - all of them use modified deterministic models that represent a form of cost function approximation. The critical difference is the reserves. You would never schedule reserve capacity in a true deterministic model. The only reason to schedule reserve is to handle uncertainty. This is the reason why including reserve requirements makes it a cost function approximation.
Models, lookahead models, base models and policies
The discussions about "deterministic models" and "stochastic models" ignore the critical distinction between lookahead models and what I am calling base models, which represent our best approximation of the real world.
The ISOs all use some form of lookahead model - this is an approximation of what might happen in the future, which is solved to determine what to do now (more precisely, what commitments to make now).
The lookahead model is just a form of policy for solving the base model, which is typically implemented as a simulator. Getting an optimal solution to the lookahead model does not tell you anything about how well the policy will work. As a general rule, there are five types of approximations made in a lookahead model:
Limiting the horizon.
Stage aggregation (for example, collapsing a multistage problem into two stages).
Outcome aggregation or sampling (representing the future with a limited set of scenarios).
Discretization of time, states and decisions.
Dimensionality reduction (ignoring some variables, such as holding forecasts fixed).
A major issue in this community is that too much time is being spent on solving the lookahead model rather than calibrating the base model. Getting a good base model *really matters*!!! If you do not have an accurate base model, how can you know if your policy would work in a realistic setting? We can all make a policy look good by suitably designing our base model.
Remember: the problem you are solving is not the lookahead model - it is the base model, which is our approximation of the real world. This is how you tune any parameters such as the amount of reserve, or the number of scenarios in your scenario tree (see below).
There are two more terms that need to be understood:
Stochastic programming and scenario trees
Stochastic programming is a method that seeks to make "robust" decisions now, given a range of "scenarios" of what might happen in the future. The idea works as follows: you start by making the decisions to schedule steam generation for tomorrow, then you observe the wind (and other random events) over the course of tomorrow, and then you make the decisions (scheduling generators) that would be made after seeing this information (e.g. the actual wind).
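In its generic two-stage form (the notation below is schematic, not the exact formulation used in any particular study), the model is

\[
\min_{x}\; c\,x + \sum_{\omega \in \Omega} p(\omega)\, Q(x,\omega),
\qquad
Q(x,\omega) \;=\; \min_{y(\omega)} \Big\{ d(\omega)\, y(\omega) \;:\; B(\omega)\, x + C(\omega)\, y(\omega) \ge h(\omega) \Big\},
\]

where x is the first-stage commitment (e.g. steam generation) made today, \Omega is the set of scenarios (e.g. sampled wind paths) with probabilities p(\omega), and y(\omega) is the recourse decision (e.g. gas turbine scheduling) made after scenario \omega is observed.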
This methodology suffers from several major limitations when applied to the stochastic unit commitment problem:
We have never seen a careful analysis of the actual decisions produced by a stochastic unit commitment model. In particular, true testing requires running the policy against many hundreds of possible sample paths. PJM has a reliability standard of one outage in 10 years - it will take a large number of scenarios, and then a large number of simulations, to ensure that level of reliability.
In short, while we believe that two-stage stochastic programming can be a powerful algorithmic methodology for many problems, we think that the unit commitment problem possesses several characteristics that make it a bad application domain for stochastic programming.
Robust cost function approximations
We all know that we can make a decision by forecasting the future, pretending that this forecast will come true, and solving the resulting deterministic model. For our unit commitment problem, we might write this as
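A schematic version of this deterministic lookahead model (the notation is illustrative, and the generator constraints are abbreviated) is

\[
\min_{(x_{t'},\,y_{t'})_{t'=t}^{t+H}} \;\sum_{t'=t}^{t+H} \big( c_{t'}\, x_{t'} + d_{t'}\, y_{t'} \big)
\qquad \text{s.t.} \qquad
A_{t'}\, x_{t'} + B_{t'}\, y_{t'} \ge f_{tt'}, \quad t' = t,\ldots,t+H,
\]

where f_{tt'} is the point forecast, made at time t, of the net load at time t', and the decisions must also satisfy the usual generator constraints (minimum up/down times, ramp rates, capacities).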
Here, x_t might be the decisions about steam generation (which we make today), while y_t represents the decisions about gas turbines, which we include in our optimization even though they will not be finalized until tomorrow. Such a model would never work in practice, because we know that loads might be higher than we expect, or the energy from wind might be lower than we expect.
For this reason, all ISOs solve a modified version of this problem that looks approximately like
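A schematic version of this modified model (again with illustrative notation) simply adds tunable reserve constraints to the deterministic lookahead above:

\[
\min_{(x_{t'},\,y_{t'})_{t'=t}^{t+H}} \;\sum_{t'=t}^{t+H} \big( c_{t'}\, x_{t'} + d_{t'}\, y_{t'} \big)
\qquad \text{s.t.} \qquad
A_{t'}\, x_{t'} + B_{t'}\, y_{t'} \ge f_{tt'}, \qquad
R^{up}_{t'}(x_{t'},y_{t'}) \ge \theta^{up}, \qquad
R^{down}_{t'}(x_{t'},y_{t'}) \ge \theta^{down},
\]

where R^{up}_{t'} and R^{down}_{t'} measure the up- and down-ramping capability of the committed fleet at time t', and \theta^{up}, \theta^{down} are the tunable reserve parameters.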
Here, we have modified our original cost function to include two constraints that enforce a specified level of up and down ramping reserves. The only reason to include these constraints is to accommodate future uncertainties. We have approximated our cost function (by adding these constraints) and now we have to find the right values for the theta parameters so that we get the right amounts of reserve.
How do we choose these parameters? In our work, we are using a carefully calibrated simulator, called SMART-ISO, which serves as the base model. In plain English, we tune these parameters by iteratively running a simulator. Mathematically, this is written as
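In schematic form, writing the reserve-adjusted policy as X^\pi(S_t|\theta) with \theta = (\theta^{up}, \theta^{down}), the tuning problem is

\[
\min_{\theta}\; \mathbb{E} \sum_{t=0}^{T} C\big(S_t,\, X^{\pi}(S_t \mid \theta)\big)
\;\approx\;
\min_{\theta}\; \frac{1}{N} \sum_{n=1}^{N} \sum_{t=0}^{T} C\big(S_t(\omega^n),\, X^{\pi}(S_t(\omega^n) \mid \theta)\big),
\]

where S_t(\omega^n) is the state of the system along sample path \omega^n, simulated using the base model.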
Here, we are minimizing over policies (which means minimizing over the reserve parameter theta). Instead of the expectation (which we cannot compute), we average over a series of simulations (or perhaps take the worst case). The sum refers to summing costs, where we simulate the evolution of the state of the system using standard simulation methods.
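As a concrete illustration, the tuning loop can be as simple as a grid search over the reserve parameters. This is only a sketch: SMART-ISO is not publicly available, so simulate_base_model below is a hypothetical stand-in for any calibrated base-model simulator that runs the reserve policy on one sample path and returns its total cost and outage count.

    import itertools
    import statistics

    def tune_reserves(sample_paths, theta_up_grid, theta_down_grid, simulate_base_model):
        """Grid search over (theta_up, theta_down), evaluated by simulating every sample path."""
        best = None
        for theta_up, theta_down in itertools.product(theta_up_grid, theta_down_grid):
            results = [simulate_base_model(path, theta_up, theta_down) for path in sample_paths]
            total_outages = sum(outages for _, outages in results)
            if total_outages > 0:
                continue  # keep only reserve settings that produce clean runs (no outages)
            avg_cost = statistics.mean(cost for cost, _ in results)
            if best is None or avg_cost < best[0]:
                best = (avg_cost, theta_up, theta_down)
        return best  # (average cost, theta_up, theta_down), or None if no setting was clean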
Robust cost function approximations are widely used in engineering practice, and have not received the attention (and respect) they deserve. Considerable knowledge and insight have been invested in the design of these CFAs. Finding a good CFA is similar to fitting a parametric function through a set of points. The only difference is that we are looking for a function to minimize, and then "fitting" it in a base model to produce the lowest costs.
It is easy to see why a robust CFA works so well for the stochastic unit commitment problem. If we carefully design a set of scenarios, we should get the behavior of scheduling reserve capacity, but the scenario tree does not guarantee that we get reserve capacity at all points in time, spread over different regions of the network. With a robust CFA, we can guarantee this.
We note that the ISOs (based on our experience) do not have stochastic simulators to tune their models. Instead, the ISOs use something even better - the real world! They design their policies in an online fashion, which avoids needing to create computer-based models. The only weakness is that they are unable to use this approach to design policies for high penetrations of wind, since this has not actually happened yet.
Illustration using offshore wind study
We have run a large number of simulations of SMART-ISO on a study of offshore wind. SMART-ISO has been under development for four years at Princeton University, where we have focused primarily on calibrating it against PJM. We then created a stochastic model of offshore wind, using samples derived from actual onshore wind and wind forecasts. These samples have been shown to accurately reproduce the forecast errors. These errors were added to actual meteorological forecasts of offshore wind conditions, produced using the WRF meteorological model.
Below are a series of samples generated from the stochastic model for a single forecast (the black line), illustrating the type of variability that we are reproducing.
We then created a total of 84 sample paths spanning four months (January/April/July/October), with three WRF forecasts per month (which produced a wide range of meteorological conditions) and 7 samples per forecast (as depicted above).
The simulations were run at 5 buildout levels, ranging from 8 GW up to 70 GW of wind generating capacity.
For each buildout level, the reserve parameters for SMART-ISO were recalibrated. The slide below illustrates the ramping reserves for each of the five buildout levels for January. These reserves are the smallest that would produce a run with no outages. We also show the reserves required if we used a perfect forecast.
By tuning the reserve levels, we could get clean runs (no outages at all during an entire week) for all 84 sample paths (over the four months) for the first two buildout levels (up to 25 GW of wind generating capacity). Note that the base PJM reserve is 1,300 MW, so the reserve levels above are much higher (these are all in the form of spinning reserve from gas turbines).
At buildout level 3 (40 GW of capacity) we encountered a single outage for a single sample path in July (see the maroon bar below). The blue bars are the number of outages with the base PJM reserve levels (no problem at buildout 0, but problems were found everywhere else, for all months). There were also never any problems with perfect information.
These experiments show that by doing nothing but tuning the current PJM reserve policy, we can handle 28 GW of wind, and we suspect that with some adjustments, we can get that to 40 GW. Note that we are accomplishing this while still lowering LMPs.
This is a preliminary study, but it shows that we can handle high levels of wind, using realistic, highly volatile sample paths for wind.