Journal article
Learning in Combinatorial Optimization: What and How to Explore
Date
2020
Published in:
Operations Research, Volume 68, Issue 5, Pages 1585-1604, Sep-Oct 2020
DOI: 10.1287/opre.2019.1926
Authors
Modaresi, Sajad
Sauré Valenzuela, Denis
Vielma, Juan Pablo
Institution
Abstract
We study dynamic decision making under uncertainty when, at each period, a decision maker implements a solution to a combinatorial optimization problem. The objective coefficient vectors of said problem, which are unobserved before implementation, vary from period to period. These vectors, however, are known to be random draws from an initially unknown distribution with known range. By implementing different solutions, the decision maker extracts information about the underlying distribution but at the same time experiences the cost associated with said solutions. We show that resolving the implied exploration versus exploitation tradeoff efficiently is related to solving a lower-bound problem (LBP), which simultaneously answers the questions of what to explore and how to do so. We establish a fundamental limit on the asymptotic performance of any admissible policy that is proportional to the optimal objective value of the LBP. We show that such a lower bound might be asymptotically attained by policies that adaptively reconstruct and solve the LBP at an exponentially decreasing frequency. Because the LBP is likely intractable in practice, we propose policies that instead reconstruct and solve a proxy for the LBP, which we call the optimality cover problem (OCP). We provide strong evidence of the practical tractability of the OCP, which implies that the proposed policies can be implemented in real time. We test the performance of the proposed policies through extensive numerical experiments, and we show that they significantly outperform relevant benchmarks in the long term and are competitive in the short term.
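The abstract describes policies that mostly exploit the empirically best solution and re-solve an exploration problem (the LBP, or its OCP proxy) at an exponentially decreasing frequency. The actual LBP/OCP formulations are optimization problems developed in the paper itself; the Python sketch below is only a schematic illustration of that explore/exploit skeleton, assuming semi-bandit feedback, a small explicitly enumerated solution set, and a naive element-covering rule standing in for the OCP. All function names, the doubling schedule, and the covering rule are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def doubling_times(T):
    """Exploration periods 1, 2, 4, 8, ...: the policy re-solves its
    exploration problem at an exponentially decreasing frequency."""
    t = 1
    while t <= T:
        yield t
        t *= 2

def run_policy(solutions, sample_cost, T):
    """Skeleton explore/exploit policy over a finite solution set.

    solutions   -- list of 0/1 numpy arrays over the ground elements
    sample_cost -- sample_cost(t) draws the (unobserved) period-t cost vector
    T           -- horizon length
    """
    n = solutions[0].size
    sums = np.zeros(n)              # accumulated observed element costs
    counts = np.zeros(n)            # per-element observation counts
    explore_at = set(doubling_times(T))
    total_cost = 0.0
    for t in range(1, T + 1):
        means = sums / np.maximum(counts, 1.0)   # empirical mean costs
        if t in explore_at:
            # Crude stand-in for solving the OCP: implement the solution
            # covering the most elements never observed so far.
            uncovered = (counts == 0).astype(float)
            x = max(solutions, key=lambda s: float(s @ uncovered))
        else:
            # Exploit: implement the empirically cheapest solution.
            x = min(solutions, key=lambda s: float(s @ means))
        cost_vec = sample_cost(t)
        total_cost += float(x @ cost_vec)
        sums += x * cost_vec        # semi-bandit feedback on used elements
        counts += x
    return total_cost
```

Under the doubling schedule the policy performs only O(log T) exploration rounds over a horizon of length T, which is the sense in which the exploration frequency decreases exponentially; the paper's contribution is choosing *which* solutions to implement in those rounds via the LBP/OCP rather than the naive covering rule sketched here.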