
Project

Novel approaches to Predict-and-Optimize

This PhD is positioned within Conversational Human-Aware Technology for Optimisation (CHAT-Opt), a research project led by Professor Tias Guns and funded by an ERC Consolidator Grant running from 2021 to 2026. The CHAT-Opt project is motivated by the observation that, in practice, solutions found by constraint programming solvers often do not fully match the outcomes desired by domain experts. To close this gap between expectation and result, CHAT-Opt works on three fronts:

1) Learning from the environment/context by integrating the process of making predictions with that of subsequently solving an optimization problem defined by those predictions
2) Learning the implicit preferences of the user by letting them interact with the solutions
3) Developing a conversational constraint solver that is able to answer questions and provide explanations about generated solutions

This PhD is primarily concerned with the first of these directions. It will build on the recent machine learning paradigm of Predict-and-Optimize, in which the task of making predictions is followed by the task of solving a constrained optimization problem defined by those predictions. Such scenarios occur frequently in industry. Take as a guiding example the problem of energy-cost-aware scheduling, wherein workloads are to be scheduled optimally with respect to future energy prices. These prices cannot be known in advance and therefore have to be predicted before an approximately optimal schedule of workloads can be computed. The problem thus breaks down into two consecutive subproblems: (i) predicting future energy prices (Predict) and (ii) computing a schedule of workloads based on those predictions (Optimize).

A naive solution to the composite problem tackles each subproblem separately. After all, the problem of making high-quality predictions has been widely studied, as has the problem of solving constrained optimization problems efficiently. In the Predict-and-Optimize paradigm, however, further gains are achieved by recognizing that the subproblems are not truly separate, but parts of a larger whole. Specifically, one can take the context of the downstream optimization problem into account when training the model used for making predictions. A key insight here is that not all prediction errors incur an equal cost, even when they are of the same magnitude. Some prediction errors affect the solution of the resulting optimization problem more than others. To relate this back to our guiding example: not all errors in the predicted energy prices lead to an equal cost in terms of suboptimally scheduled workloads. In fact, some prediction errors do not affect the resulting schedule at all, in which case the schedule remains optimal. For instance, suppose that energy is predicted to be very expensive during a certain time period, so that no workloads are scheduled in that period. If the real price of energy then turns out to be ten times higher than predicted, the prediction was very wrong, yet the resulting schedule is still optimal: it rightfully avoided scheduling workloads in the expensive period and is no different from the schedule that would have been produced with entirely accurate energy price predictions.
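To make the guiding example concrete, here is a minimal sketch of this effect. The numbers are made up and the "optimizer" is deliberately simplistic (place a fixed number of unit-length workloads in the cheapest time slots); it only serves to show that two prediction errors of the same magnitude can incur very different downstream costs (regret).

```python
import numpy as np

def schedule(prices, n_tasks):
    """Toy optimizer: place n_tasks unit workloads in the cheapest time slots."""
    w = np.zeros(len(prices))
    w[np.argsort(prices)[:n_tasks]] = 1.0
    return w

def regret(predicted, true, n_tasks):
    """Extra cost, under the true prices, of scheduling on the predictions
    instead of on the true prices themselves."""
    return true @ schedule(predicted, n_tasks) - true @ schedule(true, n_tasks)

true_prices = np.array([30.0, 35.0, 90.0, 40.0])   # realised energy prices per slot

# Error A: the already-expensive slot is predicted 60 units too high.
pred_a = np.array([30.0, 35.0, 150.0, 40.0])
# Error B: a cheap slot is predicted 60 units too high (same error magnitude).
pred_b = np.array([90.0, 35.0, 90.0, 40.0])

print(regret(pred_a, true_prices, n_tasks=2))   # 0.0  -> schedule unchanged, still optimal
print(regret(pred_b, true_prices, n_tasks=2))   # 10.0 -> cheap slot 0 is wrongly avoided
```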
In short, there is often a complex, nonlinear relationship between the prediction error and the degree of suboptimality of the solution to the resulting optimization problem. This insight can be exploited to train a better predictive model: one can focus one's limited resources (limited training data, limited model capacity, limited training time, etc.) on minimizing the prediction errors that incur a large cost, rather than the ones that matter little. This is what the Predict-and-Optimize paradigm is all about.

The aim of this PhD is to push the boundary of the Predict-and-Optimize paradigm by developing novel approaches in several directions. One such direction involves a shift in learning paradigm. Currently, most Predict-and-Optimize methodologies are based on supervised learning, in which (historical) examples of what is to be predicted have to be provided. However, such examples are not always readily available and may be difficult to acquire or produce. A shift towards the reinforcement learning paradigm therefore likely has merit. Another direction explores the relation between Predict-and-Optimize and the research field of automated planning. In this setting, a planning problem takes the place of the optimization problem. Solutions to such problems are commonly complex and spread out over a time dimension, and integrating them with the Predict-and-Optimize paradigm is likely to require specialized approaches in order to be feasible. The Predict-and-Optimize paradigm is still in its infancy and leaves many new directions to be explored. We believe this PhD would provide a great opportunity for doing so.
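To give a concrete impression of how training can be steered toward the errors that matter, below is a minimal sketch of one established decision-focused technique from the literature, the SPO+ subgradient of Elmachtoub and Grigas, applied to the toy scheduling setup above. The linear price model, the synthetic data, and all constants are illustrative assumptions only; this is one possible instantiation of the paradigm, not the approach developed in this PhD.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N_TASKS, N_FEATURES = 8, 3, 4            # slots per day, workloads, features per slot

def schedule(prices, n_tasks=N_TASKS):
    """Toy optimizer: select the n_tasks cheapest slots (0/1 decision vector)."""
    w = np.zeros(len(prices))
    w[np.argsort(prices)[:n_tasks]] = 1.0
    return w

# Synthetic data: per-slot features X and true prices c = X @ theta_true + noise.
theta_true = np.array([5.0, -3.0, 2.0, 1.0])
days = []
for _ in range(200):
    X = rng.normal(size=(T, N_FEATURES))
    c = X @ theta_true + rng.normal(scale=0.5, size=T)
    days.append((X, c))

theta = np.zeros(N_FEATURES)                 # parameters of a linear price predictor
lr = 0.01

for epoch in range(30):
    for X, c in days:
        c_hat = X @ theta                    # predicted prices for this day
        # SPO+ subgradient w.r.t. the predictions: 2 * (w*(c) - w*(2*c_hat - c)),
        # where w*(.) denotes the optimizer (here: the toy scheduler).
        g = 2.0 * (schedule(c) - schedule(2.0 * c_hat - c))
        theta -= lr * (X.T @ g)              # chain rule through the linear model

# Evaluate by regret: extra true cost of scheduling on predictions instead of truth.
avg_regret = np.mean([c @ schedule(X @ theta) - c @ schedule(c) for X, c in days])
print(f"average regret after decision-focused training: {avg_regret:.3f}")
```

The point of the sketch is the loss signal: instead of penalizing squared prediction error, the update is driven by the decisions the optimizer would take under the true versus the (perturbed) predicted prices, so errors that do not change the schedule contribute nothing to the gradient.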

Date: 1 Nov 2021 → Today
Keywords: Predict-and-optimize, Prediction, Optimization, Constrained optimization, Constraint programming, Machine learning
Disciplines: Operations research and mathematical programming, Machine learning and decision making
Project type: PhD project