Stochastic unit commitment

We are all aware of the need to handle higher levels of uncertainty when scheduling generation on the power grid (the unit commitment problem). This is becoming widely known as the "stochastic unit commitment problem," but unfortunately it has also become equated with one particular policy for handling uncertainty: multistage stochastic programming using scenario trees.

For a tutorial on stochastic optimization, using energy (and the stochastic unit commitment problem) as a context, see the tutorial below.

I am going to make the following points, each developed in more detail lower on the page:

Tutorial on stochastic optimization and stochastic unit commitment

Tutorial: Stochastic Optimization in Energy - Tutorial given to the Federal Energy Regulatory Commission, August 6, 2014 (PowerPoint format - 20 MB, PDF format - 8 MB)

This is a lengthy (200 slide) tutorial that is designed to be read (different from my usual style). It uses the contextual domain of energy systems (especially stochastic unit commitment) to provide a broad introduction to stochastic optimization. Topics include:

• Opening slides highlighting the uncertainty of wind and solar energy
• Stochastic - What does it mean, and why is it important?
• Modeling stochastic, dynamic problems (five fundamental elements of any sequential decision problem)
• The four classes of policies:
• Policy function approximations
• Robust cost function approximations
• Policies based on value function approximations
• Lookahead policies
• How to identify the right class of policy
• The fields of stochastic optimization - Brief summary highlighting that "fields" such as stochastic programming, dynamic programming and robust optimization are actually classes of policy.
• The stochastic unit commitment problem - This has become a popular area of research for stochastic programming. These slides introduce the problem, and highlight what appear to be major weaknesses in the use of scenarios for producing robust policies in high dimensional problems.
• The robust cost function approximation - These slides formalize standard industry practice, which is to create a parametric approximation for the cost function that is tuned in a stochastic base model.
• Offshore wind project - Here we show that robust CFAs do a great job of providing robustness in studies of high penetrations of renewables.
• Perspectives on robust policies - Here we contrast parametric and nonparametric modeling strategies. We then characterize robust CFAs (basically, industry practice) as a parametric cost function approximation, while scenario trees are a nonparametric model of the information process.

The four classes of policies

When I attend energy workshops featuring research on unit commitment and uncertainty, I walk away feeling as if there is only one way to handle uncertainty: formulate "stochastic programs" with "scenario trees." In a nutshell, instead of solving one optimization model over some horizon with a single point forecast of the future, scenario trees require solving a dramatically larger optimization model with different samples of what might happen in the future (represented using "scenarios").

This is simply not true. What the stochastic programming community is promoting is a particular policy known as a "lookahead policy" using a stochastic "lookahead model." However, this is only one of four fundamental classes of policies that can be used to solve sequential decision problems. The four classes are:

• Policy function approximations (PFAs) - This is any rule or function that, given a state (what we know now), returns an action. Examples include "pump water uphill between 10pm and 2am, release water downhill between noon and 4pm" or "charge the battery when the LMP is below $50/MWh, and discharge the battery when LMPs are above $70/MWh." A PFA can be a lookup table, a statistical model, or a neural network.
• Robust cost function approximations (CFAs) - A CFA is when you minimize some approximate cost model tuned to produce good performance over time in a simulator. An example relevant to this discussion is minimizing generation costs over 48 hours (or perhaps 6 hours) where additional constraints have been added to force spinning reserve into the solution. The amount of spinning reserve has to be tuned to accommodate what might happen in the future.
• Policies based on value function approximations (VFAs) - Imagine that you have a certain amount of water in a reservoir, and you generate income when you release this to create energy, but this reduces the amount of water in the reservoir an hour from now. We might use a VFA to approximate the value of putting the system in this state. VFAs are most commonly associated with dynamic programming.
• Lookahead policies - This is where you optimize over some horizon to make a better decision now. These come in two broad flavors:
• Deterministic lookahead policies - These use a point forecast of what is going to happen in the future to make a decision now. The deterministic model being solved is called the "lookahead model."
• Stochastic lookahead policies - These use a stochastic lookahead model which represents that multiple outcomes ("scenarios") might happen. Stochastic lookahead models are typically much harder to solve. If my deterministic lookahead model has 10,000 variables, then a stochastic lookahead model with 20 scenarios might have 200,000 variables. This is why these problems are of such intense interest to national labs with their skills in high performance computing.
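As a concrete illustration of the first class, the battery-charging rule above can be coded directly. Below is a minimal sketch in Python, where the two price thresholds (hypothetical values taken from the example) are the tunable parameters of the policy:

```python
def battery_pfa(lmp, charge_below=50.0, discharge_above=70.0):
    """Policy function approximation (PFA): a simple threshold rule.

    Maps the observable state (the current LMP in $/MWh) directly to an
    action, with no embedded optimization. The two thresholds are the
    tunable parameters of the policy; the values here are hypothetical.
    """
    if lmp < charge_below:
        return "charge"
    if lmp > discharge_above:
        return "discharge"
    return "hold"
```

Tuning such thresholds in a simulator is the same kind of search used to tune reserve parameters in a CFA.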

A common misconception is that the ISOs (and their vendors such as Alstom) are solving "deterministic" models, with the implication that these will not work in the presence of high levels of uncertainty. But no ISO uses a pure deterministic model - all of them use modified deterministic models that represent a form of cost function approximation. The critical difference is the reserves. You would never schedule reserve capacity in a true deterministic model. The only reason to schedule reserve is to handle uncertainty. This is the reason why including reserve requirements makes it a cost function approximation.

Models, lookahead models, base models and policies

The discussions about "deterministic models" and "stochastic models" ignore the critical distinction between lookahead models and what I am calling base models, which represent our best approximation of the real world.

The ISOs all use some form of lookahead model - this is an approximation of what might happen in the future, which is solved to determine what to do now (more precisely, what commitments to make now).

The lookahead model is just a form of policy to solve the base model, which is typically implemented as a simulator. Getting an optimal solution to the lookahead model does not tell you anything about how well the policy will work. As a general rule, there are five types of approximations made in a lookahead model:

• The planning horizon - We cannot plan out 7 days, so we plan out 2 days (in our day-ahead unit commitment). When solving same day problems, PJM looks out 2 hours (instead of 24).
• Number of stages - A "stage" represents a sequence of new information followed by a new decision. An ISO such as PJM can make adjustment decisions every 5 minutes. Same-day ("real time") unit commitment decisions might be updated every 15 minutes or so. A standard approximation when solving day-ahead unit commitment is to use a two-stage approximation: schedule steam generators, then assume that you see everything that will happen tomorrow (e.g. the entire wind trajectory all day tomorrow, the failure of any generators), and then make final scheduling decisions for gas turbines (one schedule for each scenario). Handling more than two stages is exceptionally difficult (handling just two stages is hard), so it has become conventional wisdom that this is an acceptable approximation.
• Number of scenarios - We might choose 20 different realizations of what the wind might do. Each of these realizations is called a scenario. If we model a stage as a period of just one hour, a scenario would be the change in the wind from one hour to the next.
• Discretization - PJM discretizes time in hourly increments when doing day-ahead planning, rather than the 5 minute time step which is what actually happens.
• Dimensionality reduction - This is more subtle, but it involves variables (such as forecasts) that are updated in the real world, but not in the lookahead model (a forecast is typically fixed within the lookahead model).
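The interplay between a lookahead model and the base model can be made concrete with a toy rolling-horizon loop: at each step we solve a (here trivial) lookahead model on the forecast, implement only the first decision, then let the base model reveal what actually happened. This is a sketch under simplified assumptions (commit to the forecasted peak, hypothetical costs), not a real unit commitment solver:

```python
def rolling_horizon_simulation(forecast, actual, horizon=24):
    """Simulate a deterministic lookahead policy inside a base model.

    At each hour t we solve a (trivial) lookahead model over the next
    `horizon` hours using the forecast, implement only the first decision,
    then step the base model forward using what actually happened. All
    cost numbers are hypothetical; a real lookahead model is a large MIP.
    """
    commitments, total_cost = [], 0.0
    for t in range(len(actual)):
        window = forecast[t:t + horizon]
        # "Lookahead model": commit enough capacity for the forecasted peak.
        commit = max(window)
        commitments.append(commit)
        # Base model: pay for the commitment, plus a penalty whenever the
        # actual load exceeded what we committed.
        total_cost += 10.0 * commit + 200.0 * max(actual[t] - commit, 0.0)
    return commitments, total_cost
```

Note that an optimal solution of each lookahead model says nothing about the quality of the policy; only the accumulated cost in the base model does.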

A major issue in this community is that too much time is being spent on solving the lookahead model rather than calibrating the base model. Getting a good base model *really matters*!!! If you do not have an accurate base model, how can you know if your policy would work in a realistic setting? We can all make a policy look good by suitably designing our base model.

Remember: the problem you are solving is not the lookahead model - it is the base model, which is our approximation of the real world. This is how you tune any parameters such as the amount of reserve, or the number of scenarios in your scenario tree (see below).
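As a toy illustration of this tuning loop, the sketch below searches for a reserve parameter theta by repeatedly running a tiny stand-in simulator. All costs and distributions are hypothetical; a real base model would be a calibrated simulator such as SMART-ISO:

```python
import random

def simulate_policy(theta, n_sample_paths=200, seed=0):
    """Evaluate a reserve parameter theta in a toy base model (simulator).

    Each sample path draws a random net-load surprise; scheduling `theta` MW
    of reserve costs money up front but avoids a large penalty whenever the
    surprise exceeds the reserve. All numbers are hypothetical.
    """
    rng = random.Random(seed)
    reserve_cost_per_mw = 1.0   # hypothetical $/MW cost of holding reserve
    outage_penalty = 500.0      # hypothetical penalty when reserve is short
    total = 0.0
    for _ in range(n_sample_paths):
        surprise = abs(rng.gauss(0, 100))  # MW of unforecasted net load
        cost = reserve_cost_per_mw * theta
        if surprise > theta:
            cost += outage_penalty
        total += cost
    return total / n_sample_paths

# Tune theta by searching a grid, just as reserve levels are tuned by
# repeatedly running the simulator over many sample paths.
best_theta = min(range(0, 501, 25), key=simulate_policy)
```

The point of the sketch is the structure of the loop: the parameter lives in the policy, but it is evaluated, and tuned, only in the base model.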

There are two more terms that need to be understood:

• Scenario - This is a realization of what might happen tomorrow, used only in lookahead models. A scenario can be a single realization of all events tomorrow (e.g. the wind minute by minute), or it can be represented as part of a scenario tree which captures the branching as new information comes in hour by hour.
• Sample path - These are different trajectories that we might follow in a base model (a simulator). It is very important that policies be tested in a simulator (the base model) over as many sample paths as possible.

Stochastic programming and scenario trees

Stochastic programming is a method that looks to make "robust" decisions now given a range of "scenarios" that might happen in the future. The idea is as follows: you start by making decisions to schedule steam generation tomorrow, then you observe the wind (and other random events) for all day tomorrow, and then you make the decisions (scheduling generators) that would be made after we see the information (e.g. the actual wind).
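The two-stage idea can be sketched in a few lines: choose a first-stage steam commitment that minimizes the average cost over a handful of scenarios, with gas turbines as the recourse decision. All numbers here are hypothetical and the model is drastically simplified (a real unit commitment has thousands of variables):

```python
def two_stage_cost(x_steam, scenarios, steam_cost=20.0, gas_cost=60.0):
    """Expected cost of a first-stage steam commitment x_steam (MW).

    For each scenario of tomorrow's net load, the recourse decision is the
    gas-turbine output covering whatever steam does not. Costs and loads
    are hypothetical placeholders.
    """
    total = 0.0
    for net_load in scenarios:
        residual = max(net_load - x_steam, 0.0)  # gas turbines fill the gap
        total += steam_cost * x_steam + gas_cost * residual
    return total / len(scenarios)

scenarios = [900.0, 1000.0, 1100.0, 1300.0]  # sampled net-load outcomes (MW)
# First-stage decision: the steam schedule minimizing expected cost.
x_star = min(range(0, 1501, 50), key=lambda x: two_stage_cost(x, scenarios))
```

Even in this toy version, the first-stage decision hedges across all scenarios at once, which is the source of both the appeal and the computational burden of the approach.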

This methodology suffers from several major limitations when applied to the stochastic unit commitment problem:

• Outages occur when there are sudden downward shifts in the wind (especially when the wind is above what was forecasted). If a scenario is created that reproduces one of these shifts, the model will respond by scheduling extra steam generation the day before, but only at the time that it is needed! However, these shifts can occur at any time of day, so the only way to protect against these shifts is to simulate them at every time of day (this is a lot of scenarios!).
• All the stochastic programming models use what is called a "two stage" approximation - schedule steam, see the wind all day tomorrow, and then schedule the gas turbines. But this means that the gas turbines are being managed with perfect information all day tomorrow. We have found that the biggest source of error is in the hour-ahead forecasting. If there is a large downward shift, standard forecasting (which uses "persistence forecasting") assumes the wind is not going to change, producing a large error, with no time to respond.
• The first stage decision (scheduling steam) is very high-dimensional - we have to optimize thousands of scheduling variables using only dozens of scenarios which then have to produce the behavior of scheduling the proper amount of reserve across time, and across different regions of the network.
• These two-stage stochastic programs are so large that it is not possible to solve them to optimality. It is generally assumed that an optimality gap of, say, two percent is "good enough." In our work, we have found that a two percent optimality gap can actually hide large errors in decisions (the objective function might look good, but the decisions do not). Operators do not care about objective functions - they care about the actual decisions. We have found that optimality gaps overstate the real quality of the solution, because large constant terms hidden in the objective function shrink the apparent gap.

We have never seen a careful analysis of the actual decisions produced by a stochastic unit commitment model. In particular, true testing requires running the policy against many hundreds of possible sample paths. PJM has a policy of one outage in 10 years - it will take a lot of scenarios, and then a large number of simulations, to ensure that level of reliability.

In short, while we believe that two-stage stochastic programming can be a powerful algorithmic methodology for many problems, we think that the unit commitment problem possesses several characteristics that make it a bad application domain for stochastic programming.

Robust cost function approximations

We all know that we can make a decision by trying to forecast the future, pretend that this forecast will come true, and solve the resulting deterministic model. For our unit commitment problem, we might write this as a deterministic optimization over a planning horizon. Here, x_t might be decisions about steam generation (which we make today), while y_t represents decisions about gas turbines which we include in our optimization, but these decisions will not be finalized until tomorrow. Such a model would never work in practice, because we know that loads might be higher than we expect, or the energy from wind might be lower than we expect.
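A simplified sketch of such a deterministic lookahead model (the notation is illustrative, not the exact formulation: c is the generation cost and \bar{D} the point forecast of net load) is:

```latex
\min_{(x_{t'},\,y_{t'})_{t'=t}^{t+H}} \;\sum_{t'=t}^{t+H} c_{t'}\bigl(x_{t'},\,y_{t'}\bigr)
\quad \text{subject to} \quad
\sum_i \bigl(x_{t'i} + y_{t'i}\bigr) \;\ge\; \bar{D}_{t'}, \qquad t' = t,\ldots,t+H,
```

plus all the usual generator operating constraints (ramping, min up/down times, and so on).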

For this reason, all ISOs solve a modified version of this problem in which the original cost function is augmented with two constraints that enforce a specified level of up and down ramping reserves. The only reason to include these constraints is to accommodate future uncertainties. We have approximated our cost function (by adding these constraints), and now we have to find the right values of the theta parameters so that we get the right amounts of reserve.
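A sketch of this modified (CFA) version, again with illustrative notation, keeps the same objective but adds reserve constraints parameterized by \theta = (\theta^{\text{up}}, \theta^{\text{down}}):

```latex
\min_{(x_{t'},\,y_{t'})_{t'=t}^{t+H}} \;\sum_{t'=t}^{t+H} c_{t'}\bigl(x_{t'},\,y_{t'}\bigr)
\quad \text{subject to} \quad
R^{\text{up}}_{t'}(x_{t'},\,y_{t'}) \;\ge\; \theta^{\text{up}}, \qquad
R^{\text{down}}_{t'}(x_{t'},\,y_{t'}) \;\ge\; \theta^{\text{down}},
```

where R^up and R^down denote the up and down ramping capacity available at time t' under the schedule.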

How do we choose these parameters? In our work, we use a carefully calibrated simulator, called SMART-ISO, which serves as the base model. In plain English, we tune these parameters by iteratively running the simulator. Mathematically, we are minimizing over policies (which means minimizing over the reserve parameter theta). Instead of the expectation (which we cannot compute), we average over a series of simulations (or perhaps take the worst case). The sum refers to summing costs over time, where we simulate the evolution of the state of the system using standard simulation methods.
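In symbols (notation illustrative), letting X^\pi(S_t | \theta) be the CFA policy with reserve parameter \theta and C(S_t, x_t) the cost incurred in state S_t, the tuning problem is:

```latex
\min_{\theta} \; \mathbb{E} \sum_{t=0}^{T} C\bigl(S_t,\, X^{\pi}(S_t \mid \theta)\bigr)
\;\approx\;
\min_{\theta} \; \frac{1}{N} \sum_{n=1}^{N} \sum_{t=0}^{T} C\bigl(S_t(\omega^n),\, X^{\pi}(S_t(\omega^n) \mid \theta)\bigr),
```

where \omega^1, \ldots, \omega^N are sample paths simulated in the base model.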

Robust cost function approximations are widely used in engineering practice, and have not received the attention (and respect) they deserve. Considerable knowledge and insight has been invested in the design of these CFAs. Finding a good CFA is similar to fitting a parametric function through a set of points. The only difference is that we are looking for a function to minimize, and then "fitting" it in a base model to produce the lowest costs.

It is easy to see why a robust CFA works so well for the stochastic unit commitment problem. Even if we carefully design a set of scenarios so that the model schedules reserve capacity, a scenario tree does not guarantee that we get reserve capacity at all points in time, spread over different regions of the network. With a robust CFA, we can guarantee this.

We note that the ISOs (based on our experience) do not have stochastic simulators to tune their models. Instead, the ISOs use something even better - the real world! They design their policies in an online fashion, which avoids needing to create computer-based models. The only weakness is that they are unable to use this approach to design policies for high penetrations of wind, since this has not actually happened yet.

Illustration using offshore wind study

We have run a large number of simulations of SMART-ISO in a study of offshore wind. SMART-ISO has been under development for four years at Princeton University, where we have focused primarily on calibrating it against PJM. We then created a stochastic model of offshore wind, using samples derived from actual onshore wind and wind forecasts. These samples have been shown to accurately reproduce the forecast errors. These errors were added to actual meteorological forecasts of offshore wind conditions, produced using the WRF meteorological model.

We generated a series of sample paths from the stochastic model for a single forecast, illustrating the type of variability that we are reproducing. We then created a total of 84 sample paths spanning four months (January/April/July/October), with three WRF forecasts per month (producing a wide range of meteorological conditions) and 7 samples per forecast.

The simulations were run at 5 buildout levels, ranging from 8 GW up to 70 GW of wind generating capacity.

For each buildout level, the reserve parameters for SMART-ISO were recalibrated. For January, we computed the ramping reserves for each of the five buildout levels; these reserves are the smallest that would produce a run with no outages. We also computed the reserves required if we used a perfect forecast. By tuning the reserve levels, we could get clean runs (no outages at all during an entire week) for all 84 sample paths (over the four months) for the first two buildout levels (up to 25 GW of wind generating capacity). Note that the base PJM reserve is 1300 MW, so the reserve levels we computed are much higher (these are all in the form of spinning reserve from gas turbines).

At buildout level 3 (40 GW of capacity) we encountered a single outage for a single sample path in July. With the base PJM reserve levels, by contrast, there were no problems at buildout 0, but problems were found at every other buildout level, for all months. There were never any problems with perfect information. These experiments show that by doing nothing but tuning the current PJM reserve policy, we can handle 28 GW of wind, and we suspect that with some adjustments, we can get that to 40 GW. Note that we are accomplishing this while still lowering LMPs.

This is a preliminary study, but it shows that we can handle high levels of wind, using realistic, highly volatile sample paths for wind.