Blending Robust and Stochastic Optimization


Hybrid Robust-Stochastic Optimization Models and Bounds on their Generalization Error

In practical applications, the input data of optimization problems is often unreliable and affected by errors and noise, so solutions computed by classical algorithms may perform poorly or even be infeasible in practice. Robust and stochastic optimization techniques can remedy these effects. In robust optimization, the goal is to find solutions that perform best in the worst case over an additional set of scenarios of possible input data. In stochastic optimization, given the same scenario set together with a probability distribution over it, the goal is to find solutions whose expected performance is optimal. The robust approach can be too pessimistic in practice, while a probability distribution needed to apply stochastic optimization techniques may not be available. Moreover, stochastic optimization usually involves additional computational difficulties.

In this project, we analyze the performance of a novel parameterized hybrid optimization framework that contains the robust model and the full stochastic model as its extreme cases. The hybrid framework aggregates the scenario set into classes of scenarios that are assumed to occur with equal probability. Within each class, the worst case is assumed, and the optimization task is to find solutions whose expected worst-case performance over the classes is best possible.
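The hybrid objective can be sketched in a few lines. The following toy example is purely illustrative (the function name, scenario labels, and costs are made up and do not come from the project itself): it evaluates one candidate solution by taking the worst case inside each equally likely class of scenarios and then averaging over the classes. With a single class one recovers the robust value; with singleton classes one recovers the expected (stochastic) value.

```python
def hybrid_value(costs, classes):
    """Hybrid objective of one candidate solution (minimization setting).

    costs   -- dict mapping scenario -> cost of the solution in that scenario
    classes -- partition of the scenarios into equally likely classes
    Takes the worst case inside each class, then averages over the classes.
    """
    return sum(max(costs[s] for s in cls) for cls in classes) / len(classes)

# Toy instance: one solution evaluated under four scenarios.
costs = {"s1": 4.0, "s2": 2.0, "s3": 7.0, "s4": 1.0}
scenarios = list(costs)

robust = hybrid_value(costs, [scenarios])                   # one class: worst case 7.0
stochastic = hybrid_value(costs, [[s] for s in scenarios])  # singletons: mean 3.5
hybrid = hybrid_value(costs, [["s1", "s2"], ["s3", "s4"]])  # (max(4,2) + max(7,1)) / 2 = 5.5
```

Varying the class partition thus interpolates between the two extreme models.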

We are interested in bounds on the generalization error of the hybrid model when only a small sample of the full scenario set is available. That is, we are interested in the ratio between the optimal solution value of the hybrid model given only a small sample of the full scenario set and its solution value given access to the full probability distribution. In preliminary work on the static robust maximum flow problem, we obtained lower and upper bounds on the generalization error [1]. In the future, we would like to generalize these results to broader classes of combinatorial optimization problems.
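The ratio in question can be illustrated empirically. The sketch below is a hypothetical setup, not the maximum flow instance from [1]: candidate solutions are represented directly by their scenario costs, the helper `hybrid_opt` and the random data are assumptions for illustration, and the class partitions are chosen arbitrarily. It compares the optimal hybrid value computed from a small scenario sample with the value on the full scenario set.

```python
import random

def hybrid_opt(solutions, classes):
    """Optimal hybrid value over a set of candidate solutions (minimization).

    solutions -- list of dicts mapping scenario -> cost
    classes   -- partition of a scenario set into equally likely classes
    """
    def value(costs):
        return sum(max(costs[s] for s in cls) for cls in classes) / len(classes)
    return min(value(costs) for costs in solutions)

random.seed(0)
full = [f"s{i}" for i in range(12)]
# Two hypothetical candidate solutions with random scenario costs.
solutions = [{s: random.uniform(1, 10) for s in full} for _ in range(2)]

# Hybrid model with three equally likely classes on the full scenario set ...
full_classes = [full[i::3] for i in range(3)]
v_full = hybrid_opt(solutions, full_classes)

# ... versus the same model built from a sample of only six scenarios.
sample = random.sample(full, 6)
sample_classes = [sample[i::3] for i in range(3)]
v_sample = hybrid_opt(solutions, sample_classes)

ratio = v_sample / v_full  # empirical analogue of the generalization error
```

Bounding how far this ratio can deviate from one, as a function of the sample size and the class structure, is the kind of question the project studies.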

This project is a joint work of Fabian Mies, Britta Peis, and Andreas Wierz.