
## Publication

# Lifted Probabilistic Inference by Variable Elimination

### Book - Dissertation

Representing, learning, and reasoning about knowledge are central to artificial intelligence (AI). A long-standing goal of AI is to unify logic and probability, so as to benefit from the strengths of both formalisms. Probability theory allows us to represent and reason in uncertain domains, while first-order logic allows us to represent and reason about structured, relational domains. Many real-world problems exhibit both uncertainty and structure, and thus can be more naturally represented with a combination of probabilistic and logical knowledge. This observation has led to the development of probabilistic logical models (PLMs), which combine probabilistic models with elements of first-order logic to succinctly capture uncertainty in structured, relational domains, such as social networks and citation graphs.

While PLMs provide expressive representation formalisms, efficient inference remains a major challenge in these models, as they typically involve a large number of objects and interactions among them. Among the various efforts to address this problem, a promising line of work is lifted probabilistic inference. Lifting attempts to improve the efficiency of inference by exploiting the symmetries in the model. The basic principle of lifting is to perform an inference operation once for a whole group of interchangeable objects, instead of once per individual in the group. Researchers have proposed lifted versions of various (propositional) probabilistic inference algorithms, and have shown large speedups achieved by the lifted algorithms over their propositional counterparts.

In this dissertation, we make a number of novel contributions to lifted inference, mainly focusing on lifted variable elimination (LVE). First, we focus on constraint processing, which is an integral part of lifted inference. Lifted inference algorithms are commonly tightly coupled to a specific constraint language.
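The lifting principle described above can be illustrated with a minimal sketch (not taken from the dissertation). Suppose a model contains n interchangeable binary variables, each contributing weight `w1` when true and `w0` when false. A ground computation enumerates all 2^n assignments, whereas a lifted computation groups assignments by the count of true variables, since all assignments within a group carry the same weight:

```python
from itertools import product
from math import comb

def ground_partition(n, w1, w0):
    # Ground: enumerate all 2**n joint assignments to the n variables
    # and sum their weights -- one multiplication per individual.
    total = 0.0
    for assignment in product([0, 1], repeat=n):
        weight = 1.0
        for v in assignment:
            weight *= w1 if v else w0
        total += weight
    return total

def lifted_partition(n, w1, w0):
    # Lifted: the n variables are interchangeable, so every assignment
    # with k true variables has identical weight w1**k * w0**(n-k).
    # Sum over the n+1 possible counts, weighting each group by the
    # number of assignments it contains (n choose k).
    return sum(comb(n, k) * (w1 ** k) * (w0 ** (n - k)) for k in range(n + 1))

# 1024 terms vs. 11 terms, identical result:
print(ground_partition(10, 0.3, 0.7))
print(lifted_partition(10, 0.3, 0.7))
```

The lifted sum touches n + 1 groups instead of 2^n assignments, which is the source of the exponential-to-polynomial speedups mentioned above. (This counting trick is only one lifting operation; LVE combines several such operators.)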
We bring more insight into LVE by decoupling its operators from the constraint language used. We define the lifted inference operations so that they operate on the semantic rather than the syntactic level, making them language-independent. Further, we show how this flexibility allows us to improve the efficiency of inference by enhancing LVE with a more powerful constraint representation.

Second, we generalize the "lifting" tools used by LVE by introducing a number of novel lifted operators in this algorithm. We show how these operators allow LVE to exploit a broader range of symmetries, and thus expand the range of problems it can solve in a lifted way.

Third, we advance our theoretical understanding of lifted inference by providing the first completeness result for LVE. We prove that LVE is complete, i.e., always has a lifted solution, for the fragment of 2-logvar models, a model class that can represent many useful relations in PLMs, such as (anti-)symmetry and homophily. This result also shows the importance of our contributions to LVE, as we prove they are necessary and sufficient for LVE to achieve completeness.

Fourth, we propose the structure of first-order decomposition trees (FO-dtrees) as a tool for symbolically analyzing lifted inference solutions. We show how FO-dtrees can be used to characterize an LVE solution in terms of a sequence of lifted operations. We further provide a theoretical analysis of the complexity of lifted inference based on the corresponding FO-dtree, which is valuable for finding and selecting among different lifted solutions.

Finally, we present a pre-processing method for speeding up (lifted) inference. Our goal with this method is to speed up inference in PLMs by restricting the computations to the requisite part of the model. For this, we build on the Bayes-ball algorithm, which identifies the requisite variables in a ground Bayesian network. We present a lifted version of Bayes-ball, which works with first-order Bayesian networks, and show how it applies to lifted inference.
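To make the pre-processing idea concrete, the following is a minimal sketch of the *ground* Bayes-ball algorithm that the lifted version builds on (after Shachter's formulation; this is an illustrative simplification, not the dissertation's lifted algorithm). It passes a "ball" through the network and marks the nodes whose conditional distributions are requisite for answering a query given evidence:

```python
from collections import deque

def requisite_nodes(parents, query, evidence):
    """Simplified Bayes-ball on a ground Bayesian network, given as a
    dict {node: [parent, ...]}. Returns the set of nodes whose
    conditional distributions are requisite for P(query | evidence)."""
    # Build the children index from the parent lists.
    children = {v: [] for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children[p].append(v)
    evidence = set(evidence)
    top, bottom = set(), set()  # nodes marked on top / on bottom
    # Start: each query node receives the ball as if from a child.
    schedule = deque((q, "child") for q in query)
    while schedule:
        j, frm = schedule.popleft()
        if frm == "child":
            if j in evidence:
                continue  # an observed node blocks a ball from a child
            if j not in top:
                top.add(j)  # pass the ball up to all parents
                for p in parents[j]:
                    schedule.append((p, "child"))
            if j not in bottom:
                bottom.add(j)  # bounce the ball down to all children
                for c in children[j]:
                    schedule.append((c, "parent"))
        else:  # ball arrives from a parent
            if j in evidence:
                if j not in top:
                    top.add(j)  # v-structure: bounce back up to parents
                    for p in parents[j]:
                        schedule.append((p, "child"))
            elif j not in bottom:
                bottom.add(j)  # pass the ball down to all children
                for c in children[j]:
                    schedule.append((c, "parent"))
    return top  # top-marked nodes carry the requisite distributions

# Chain A -> B -> C: querying C given evidence on B only requires C's CPD.
chain = {"A": [], "B": ["A"], "C": ["B"]}
print(requisite_nodes(chain, ["C"], ["B"]))
```

Restricting inference to this requisite set prunes irrelevant parts of the model before (lifted) inference runs; the dissertation's contribution is to perform the same identification on the first-order level, without grounding.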

Year of publication: 2013

Accessibility: Open