A cornucopia of algorithms

Over the years, we have had fun developing and naming a series of algorithms. Just as the name "CASTLE Lab" has helped to define a research area, the names we have developed for our algorithms help to identify and define important plateaus in our research. Below is a listing of some of our more creative algorithm names, with pointers to where they can be found.

Knowledge gradient - This is a method for efficiently collecting information from a discrete set of choices, and is intended for problems where a single measurement is considered "expensive," although the real meaning of this varies widely from one problem class to another. The knowledge gradient algorithm, faced with a (potentially large) set of possible measurements, chooses the measurement that offers the greatest marginal value. The value of a measurement depends on what is being done with it. We may simply be choosing the best of a (small) finite set of choices, and we want the choice that performs the best (fastest, cheapest, biggest). In other settings, we may have an optimization problem (a shortest path problem, a linear program) where a measurement tells us something more about the cost or performance of a particular variable in the problem. We want to make the measurement that produces the biggest impact on the optimization problem.
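
For independent normal beliefs, the marginal value of one more measurement has a well-known closed form. The sketch below is illustrative only (the function and variable names are ours, not from any particular paper): it scores each alternative by its expected improvement over the current best.

```python
import math

def kg_values(mu, sigma_tilde):
    """Knowledge-gradient score for each alternative under independent
    normal beliefs: mu[i] is the current estimate of alternative i, and
    sigma_tilde[i] is the standard deviation of the change in that
    estimate produced by one more measurement."""
    pdf = lambda z: math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    scores = []
    for i in range(len(mu)):
        best_other = max(mu[j] for j in range(len(mu)) if j != i)
        z = -abs(mu[i] - best_other) / sigma_tilde[i]
        scores.append(sigma_tilde[i] * (z * cdf(z) + pdf(z)))  # expected gain
    return scores

# Measure the alternative with the largest knowledge gradient: a choice
# that is nearly best but still very uncertain can beat the current leader.
mu = [1.0, 1.2, 0.8]
sigma_tilde = [0.5, 0.1, 0.9]
best_to_measure = max(range(len(mu)), key=lambda i: kg_values(mu, sigma_tilde)[i])
```

Note that the highest-scoring alternative need not be the one with the highest estimated mean; uncertainty matters as much as the estimate itself.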

Knowledge gradient with correlated beliefs - This is simply the knowledge gradient modified to handle correlations in the beliefs about the performance of different choices. We may be comparing two different drugs, different teams of people, or measurements of a continuous function. KGCB is the knowledge gradient concept applied to problems where we feel that we know how to estimate the covariance structure between different choices. KGCB is an important variation because it allows us to consider problems where the number of potential measurements is much larger than our budget for making measurements.
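
The key mechanism can be sketched with the standard Bayesian update for multivariate normal beliefs (an illustrative sketch under assumed names, not code from the paper): because the covariance matrix links the alternatives, one observation updates the belief about all of them.

```python
def update_correlated_beliefs(mu, Sigma, i, y, noise_var):
    """One Bayesian update of multivariate normal beliefs after measuring
    alternative i and observing value y (with measurement variance
    noise_var). Because Sigma carries correlations, the single observation
    shifts the belief about every alternative, not just alternative i."""
    n = len(mu)
    col = [Sigma[j][i] for j in range(n)]            # i-th column of Sigma
    denom = Sigma[i][i] + noise_var
    mu_new = [mu[j] + (y - mu[i]) * col[j] / denom for j in range(n)]
    Sigma_new = [[Sigma[j][k] - col[j] * col[k] / denom for k in range(n)]
                 for j in range(n)]
    return mu_new, Sigma_new

# Alternatives 0 and 1 are highly correlated; measuring 0 also moves 1,
# while the uncorrelated alternative 2 is untouched.
mu = [0.0, 0.0, 0.0]
Sigma = [[1.0, 0.9, 0.0],
         [0.9, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
mu2, Sigma2 = update_correlated_beliefs(mu, Sigma, i=0, y=1.0, noise_var=1.0)
```

This is why a small measurement budget can still cover a large set of alternatives: each measurement carries information about every correlated choice.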

SPAR - Separable, Projective Approximation Routine - SPAR may well be the end of the line in our effort to adaptively estimate piecewise linear, concave functions. This approach solves stochastic optimization problems using sequences of separable approximations, where we use stochastic gradient information to update the slopes of the function. Since these updates can violate concavity, we use a projection method to obtain a new set of slopes that satisfy concavity.

Powell, W.B., A. Ruszczynski and H. Topaloglu, "Learning Algorithms for Separable Approximations of Stochastic Optimization Problems," Mathematics of Operations Research (to appear).

CAVE - Concave Adaptive Value Estimator - CAVE is starting to prove to be one of our most important tools. Assume that we know we have a concave function, but we can only estimate the slopes of the function at different points with some error. As a result, we may easily estimate a slope at a point that violates concavity. This algorithm ensures that each statistical estimate of the function maintains concavity. It can be applied to continuous functions or (our primary use) piecewise linear functions. See:

Godfrey, G. and W.B. Powell, "An Adaptive, Distribution-Free Algorithm for the Newsvendor Problem with Censored Demands, with Application to Inventory and Distribution Problems," Management Science, Vol. 47, No. 8, pp. 1101-1112, (2001).
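
The concavity-restoring step behind CAVE and SPAR can be illustrated with a simple projection. After a noisy slope update, the slopes of the piecewise-linear function may no longer be nonincreasing; averaging adjacent violating segments restores the ordering. This is a minimal sketch in the style of the pool-adjacent-violators algorithm, not the exact projection used in the papers.

```python
def project_concave(slopes):
    """Project a slope vector onto the set of nonincreasing slopes (the
    concavity condition for a piecewise-linear function) by averaging
    adjacent segments that violate the ordering."""
    blocks = []                                   # each block: [sum, count]
    for s in slopes:
        blocks.append([float(s), 1])
        # merge while an earlier block's average lies below a later one's
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]:
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out

# A noisy update broke concavity at the third segment; the projection
# averages the offending segments back into nonincreasing order.
fixed = project_concave([5.0, 3.0, 4.0, 1.0])   # -> [5.0, 3.5, 3.5, 1.0]
```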

CUPPS - CUtting Plane and Partial Sampling algorithm - This algorithm is a specialization of nested Benders decomposition for multistage stochastic programs. It is a provably convergent algorithm (in the limit) that can handle thousands of scenarios per stage, and dozens (perhaps hundreds) of stages. However, it is restricted to right-hand-side randomness and independence between stages. See:

Chen, Z.-L. and W.B. Powell, "A Convergent Cutting Plane and Partial Sampling Algorithm for Multistage Linear Programs with Recourse," Journal of Optimization Theory and Applications, Vol. 103, No. 3, pp. 497-524 (1999).

LAMA - Linear Approximation Multiplier Adjustment - A multiplier adjustment method for the "LQN" formulation. We had been working on algorithms based around linear approximations of value functions. These algorithms typically used subgradient-type updates, which were easy to implement and produced good results, but generally exhibited more noise from one iteration to the next than we liked. The "LAMA" algorithm is a multiplier adjustment method that produced the best results for this problem class. See:

Carvalho, T. and W.B. Powell, "A Multiplier Adjustment Method for Dynamic Resource Allocation Problems," Transportation Science, Vol. 34, No. 2, pp. 150-164 (2000).

LQN - Logistics Queueing Network - This is a way of formulating dynamic resource allocation problems, and is not an algorithm per se. At the time this concept was introduced, the classical way of formulating fleet management problems was using a dynamic network. Here, we introduced the idea of formulating these problems as a series of double-ended queues of customers waiting for vehicles. The first formulation assumed that all the vehicles were the same, but all the customers were different. The challenge was deciding which customers should be served first when the number of vehicles was constraining. See:

Powell, W.B. and T. Carvalho, "Dynamic Control of Logistics Queueing Networks for Large Scale Fleet Management," Transportation Science, Vol. 32, No. 2, pp. 90-109, 1998.

SHAPE - Successive Hybrid Approximation ProcedurE - One of the challenges in solving two-stage stochastic programs is identifying value function (or recourse function) approximations that not only capture the structure of the problem but also have nice algorithmic properties (for example, separable functions are much nicer than nonseparable functions). The SHAPE algorithm takes a nice nonlinear function, and then uses stochastic gradients (which are usually easy to find) to "tilt" the function. For two-stage problems with continuously differentiable functions, this is a provably convergent algorithm, and it is very easy to use and apply. See:

Cheung, R.K.-M. and W.B. Powell, "SHAPE - A Stochastic, Hybrid Approximation Procedure for Two-Stage Stochastic Programs," Operations Research, Vol. 48, No. 1, pp. 73-79 (2000).
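
The tilting idea can be sketched in one dimension. Everything below is illustrative, not the paper's algorithm: the function names, the 1/n stepsizes, and the test problem are our own assumptions. A known smooth approximation F0 is augmented with a linear term theta * x whose coefficient drifts toward noisy gradient observations of the true function, and each iteration minimizes the tilted approximation exactly.

```python
import random

def shape_minimize(dF0, argmin_tilted, sample_grad, steps=5000, seed=1):
    """One-dimensional sketch of the SHAPE idea: tilt a known smooth
    approximation F0 with a linear term theta * x so that its slope
    matches stochastic gradient observations of the true function."""
    random.seed(seed)
    theta, x = 0.0, 0.0
    for n in range(1, steps + 1):
        g = sample_grad(x)                     # stochastic gradient of true F
        theta += (g - (dF0(x) + theta)) / n    # tilt update with stepsize 1/n
        x = argmin_tilted(theta)               # minimize F0(x) + theta * x
    return x

# Example: F0(x) = x^2, while the true objective is E[F(x)] = (x - 2)^2,
# observed only through noisy gradients; the tilted minimizers approach 2.
x_star = shape_minimize(
    dF0=lambda x: 2.0 * x,
    argmin_tilted=lambda theta: -theta / 2.0,  # argmin of x^2 + theta * x
    sample_grad=lambda x: 2.0 * (x - 2.0) + random.gauss(0.0, 0.1),
)
```

The appeal of the approach is visible even in this toy: the subproblem solved at each iteration keeps whatever nice structure F0 has, since only a linear term is added.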

SCAM - Successive Convex Approximation Method - This algorithm was designed for stochastic, multistage resource allocation problems. In contrast with our earlier work using linear approximations, this method used "convex" approximations (we were minimizers back in those days), based on the "tree recourse" algorithm (we never came up with a catchy name for that method). The tree recourse method is described in:

Cheung, R.K.-M. and W.B. Powell, "Stochastic Programs over Trees with Random Arc Capacities," Networks, Vol. 24, pp. 161-175 (1994).

The SCAM algorithm is described in:

Cheung, R.K.-M. and W.B. Powell, "An Algorithm for Multistage Dynamic Networks with Random Arc Capacities, with an Application to Dynamic Fleet Management," Operations Research, Vol. 44, No. 6, pp. 951-963. (1996).

SLAP - Successive Linear Approximation Procedure - The first of our algorithms for dynamic fleet management. Here, we found that we could form linear approximations of the value functions, giving us a sequence of approximate problems that could be solved to optimality. See:

Frantzeskakis, L. and W. B. Powell, "A Successive Linear Approximation Procedure for Stochastic, Dynamic Vehicle Allocation Problems," Transportation Science, Vol. 24, No. 1, pp. 40 - 57 (1990).

(c) Warren B. Powell, 1997-2010