Learning Model Constraints for Structured Prediction
Structured output prediction based on discriminatively trained probabilistic graphical models is a powerful framework that has led to large improvements in predictive systems. These models, however, often require strong a priori constraints to guarantee tractable inference procedures. Such constraints can limit the expressive power of the model and can therefore be viewed as a necessary evil. This project will develop statistical learning tools for structured output prediction that incorporate model constraints ensuring low-order polynomial time complexity of the inference procedure. Furthermore, these constraints will be learned from training data to maximize the expressiveness of the model class while probabilistically enforcing efficient inference in a fashion that adapts to the specific problem instance. The core motivation of this proposal is to provide statistical prediction tools that enable fast and accurate prediction of complex outputs, such as image segmentations. In such models, prediction is itself an inference procedure whose computational complexity is determined by properties of the model: it can be NP-hard in the worst case, but can also have low-order polynomial complexity in the best case. We therefore propose to study the relationship between model expressiveness and model tractability, leading to more accurate statistical learning methods for setting the parameters of such models.
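As an illustrative sketch (not the method proposed here), one classical example of a structural constraint that buys tractability is restricting the graphical model to a chain: exact MAP inference then reduces to the Viterbi dynamic program, costing O(nk^2) for n variables with k states each, whereas MAP inference in an unconstrained pairwise model is NP-hard in general. The function and score conventions below are assumptions chosen for the example.

```python
# Hypothetical illustration: exact MAP inference in a chain-structured
# pairwise model via the Viterbi dynamic program. The chain constraint
# is what guarantees low-order polynomial time, O(n * k^2).

def viterbi_map(unary, pairwise):
    """Return the MAP assignment of a chain model (higher score = better).

    unary:    list of n vectors; unary[i][s] is the score of state s at node i.
    pairwise: list of n-1 matrices; pairwise[i][s][t] is the score of
              edge (i, i+1) taking states (s, t).
    """
    n, k = len(unary), len(unary[0])
    score = list(unary[0])          # best score of any prefix ending in state s
    back = []                       # backpointers for trace-back
    for i in range(1, n):
        new_score, ptr = [], []
        for t in range(k):
            # Best predecessor state for node i in state t.
            best_s = max(range(k), key=lambda s: score[s] + pairwise[i - 1][s][t])
            new_score.append(score[best_s] + pairwise[i - 1][best_s][t] + unary[i][t])
            ptr.append(best_s)
        score, back = new_score, back + [ptr]
    # Trace back the optimal assignment from the best final state.
    state = max(range(k), key=lambda t: score[t])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))
```

For instance, with a strong agreement bonus on every edge and a unary preference at one end of the chain, the MAP assignment propagates that preference along the whole chain.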