Below are some general purpose routines that we have developed.

MOLTE – Modular, Optimal Learning Testing Environment – This is a Matlab-based environment for comparing algorithms for offline and online learning with discrete alternatives.

Monotone ADP – Approximate dynamic programming when the value function increases monotonically with respect to each state variable.

Optimal learning software using the knowledge gradient:

The knowledge gradient with independent beliefs – Excel spreadsheet

The knowledge gradient with correlated beliefs – Matlab software

The knowledge gradient using a linear belief model – Matlab software

Sparse-KG – The knowledge gradient for a sparse-additive belief model
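The knowledge gradient with independent normal beliefs has a simple closed form: for each alternative, it values one more measurement by how much the posterior mean could change relative to the best competing alternative. As an illustration only (the downloadable packages above are the actual implementations; this is a minimal Python sketch of the standard formula, with made-up example numbers):

```python
import math

def kg_independent(mu, sigma2, noise_var):
    """Knowledge-gradient value of measuring each alternative once more,
    under independent normal beliefs (illustrative sketch).
    mu: posterior means; sigma2: posterior variances; noise_var: measurement noise."""
    def pdf(z):  # standard normal density
        return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    def cdf(z):  # standard normal cumulative distribution
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    kg = []
    for x, (m, s2) in enumerate(zip(mu, sigma2)):
        # predictive standard deviation of the change in the mean of x
        sigma_tilde = s2 / math.sqrt(s2 + noise_var)
        best_other = max(m2 for y, (m2, _) in enumerate(zip(mu, sigma2)) if y != x)
        zeta = -abs(m - best_other) / sigma_tilde
        kg.append(sigma_tilde * (zeta * cdf(zeta) + pdf(zeta)))
    return kg

# Example: alternative 0 has a lower mean but much higher uncertainty,
# so the KG policy chooses to measure it next.
values = kg_independent(mu=[1.0, 1.5, 1.2], sigma2=[4.0, 0.5, 2.0], noise_var=1.0)
choice = max(range(len(values)), key=values.__getitem__)
```

Note how the KG value rewards uncertainty, not just a high mean: the alternative with the largest posterior variance can be worth measuring even when its estimated mean is not the best.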

MOLTE – Modular, Optimal Learning Testing Environment

MOLTE is a Matlab-based environment for testing policies for learning the maximum of a function over a (small) set of discrete alternatives. The problems may be offline (ranking and selection) or online (multiarmed bandit). New problems can be easily added by encapsulating each problem's data in its own .m file. Similarly, each algorithm is contained in its own .m file. A simple spreadsheet interface lets the user specify which problems are tested, and with which algorithms.
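The plug-in structure described above (problems and policies as interchangeable modules, with a driver that runs the comparison) can be sketched in a few lines. This is a hypothetical Python analogue for illustration only, not MOLTE's actual Matlab interface; the function names, the round-robin policy, and the toy problem are all invented here:

```python
import random

def run_offline(problem_sampler, truth, policy, budget, seed=0):
    """Minimal analogue of MOLTE's plug-in structure: a problem is a
    noisy sampler, a policy maps the measurement history to the next
    alternative, and the driver reports the opportunity cost of the
    final recommendation (0 means the best alternative was found)."""
    rng = random.Random(seed)
    history = []                              # list of (alternative, observation)
    for _ in range(budget):
        x = policy(history, len(truth))       # policy picks the next measurement
        history.append((x, problem_sampler(x, rng)))
    # final recommendation: alternative with the best sample mean
    samples = {}
    for x, y in history:
        samples.setdefault(x, []).append(y)
    best = max(samples, key=lambda x: sum(samples[x]) / len(samples[x]))
    return max(truth) - truth[best]

# Toy problem (true means are hidden from the policy) and a
# pure-exploration round-robin policy.
truth = [0.0, 1.0, 0.5]
sampler = lambda x, rng: truth[x] + rng.gauss(0, 0.1)
round_robin = lambda hist, n: len(hist) % n
cost = run_offline(sampler, truth, round_robin, budget=30)
```

Swapping in a different policy or problem is just a matter of passing a different function, which mirrors how MOLTE swaps .m files.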

A user’s manual is available here.

The complete system, including the MOLTE modeling environment, .m problem files, .m algorithm files, the spreadsheet that determines which problems and algorithms are tested in a particular simulation, and the Matlab simulation environment, can be downloaded by clicking on:

The MOLTE learning system

MOLTE was used for the experimental work in the following paper, which derives a finite-time bound for the knowledge gradient policy:

Yingfei Wang, Warren B. Powell, “A Modular Optimal Learning Testing Environment” (under review).

Approximate dynamic programming for monotone value functions

There are a surprising number of dynamic programming problems where the value function increases monotonically with each dimension of the state variable. When this property arises, exploiting monotonicity can dramatically increase the rate of convergence of an approximate dynamic programming algorithm.

The software below, written in Matlab, requires the user to specify a cost function, a transition function, and a discrete set of actions. State variables may be continuous, but the software discretizes them on the fly to a user-specified granularity.
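The key step that exploits monotonicity is a projection: after each observed update to the value at one state, the estimates at comparable states are pulled up or down so the value function stays monotone. A minimal one-dimensional Python sketch of this projection step (illustrative only; the package itself is Matlab and handles the multidimensional partial order):

```python
def monotone_project(v, i, v_new):
    """Update the value estimate at state index i to v_new, then enforce
    that the array of estimates v remains nondecreasing in the state.
    Returns a new list; illustrative 1-D sketch of the projection step."""
    v = list(v)
    v[i] = v_new
    for j in range(len(v)):
        if j < i and v[j] > v_new:
            v[j] = v_new   # smaller states cannot be worth more than v_new
        elif j > i and v[j] < v_new:
            v[j] = v_new   # larger states cannot be worth less than v_new
    return v

# An observation of 1.0 at state 2 pulls the violating estimate at
# state 1 down, restoring monotonicity in a single pass.
v = monotone_project([0.0, 2.0, 3.0, 5.0], i=2, v_new=1.0)
```

Because one observation corrects the estimates at many states at once, this projection is what drives the faster convergence noted above.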

Readme file – Provides introduction to the monotone-ADP code

Simple storage application – This uses a simple charge/discharge storage problem to illustrate the code. It includes the Matlab files and a simple demonstration problem.

Optimal stopping problem – This is a more interesting (and challenging) multidimensional optimal stopping application.

Sparse-KG – Knowledge gradient for sparse additive belief model

This package guides the sequential design of experiments for a finite set of alternatives, where the belief model is described by a sparse linear model. Our motivating application was guiding the selection of probes for an RNA molecule, to learn a high-dimensional belief model of the molecule's energy across hundreds of sites.

A technical paper describing the methodology and the RNA application is available at:

Yan Li, Han Liu, W.B. Powell, “The Knowledge Gradient Policy using a Sparse Additive Belief Model.”

The software can be downloaded from:

Sparse-KG software