
Publication

Neural Probabilistic Logic Programming

Book - Dissertation

In the past decade, deep learning has revolutionized many applications in artificial intelligence (AI), ranging from image classification to natural language processing. After this initial success, more and more researchers are now encountering the limitations of deep learning. Whereas it excels at pattern recognition tasks on high-dimensional data, it struggles with reasoning and generalization. These strengths and weaknesses are complementary to those of symbolic AI.
An integration of symbolic AI (logic) and sub-symbolic AI (deep learning) could thus bring the best of both worlds. This is the domain of neural-symbolic AI. This thesis makes three main contributions.

First, the field of neural-symbolic AI is moving very quickly and could benefit from an overview that categorizes its many approaches along well-known concepts from a different but related field. We therefore propose a categorization of neural-symbolic AI based on a comparison with the well-established field of statistical relational AI (StarAI), which also focuses on integrating reasoning and learning.

Second, we introduce our neural-symbolic framework, DeepProbLog. DeepProbLog integrates logic and neural networks by adding the concept of the neural predicate to the probabilistic logic programming language ProbLog. It distinguishes itself from other neural-symbolic methods in that it combines probabilistic logic, a fully expressive logic programming language, and neural networks. We evaluate the framework on four sets of experiments that showcase DeepProbLog's ability to: 1) integrate logical reasoning and deep learning, 2) integrate probabilistic reasoning and deep learning, 3) perform program induction through parameter learning, and 4) manipulate embeddings and perform natural language reasoning. The results show that DeepProbLog outperforms neural networks on tasks requiring both reasoning and learning, and that it also outperforms other neural-symbolic frameworks.

Third, one of the main drawbacks of DeepProbLog is that its inference does not scale well: reasoning probabilistically over the entire output of the neural networks can become prohibitively expensive. Our final contribution is therefore an approximate inference technique for neural probabilistic logic programming, called DPLA*. It replaces the standard exhaustive search for proofs with an informed A*-based search that makes inference scalable by considering only a subset of all proofs. Combining approximate inference with learning, however, brings its own challenges. To address these, we consider the curriculum learning setting and introduce exploration based on the UCB algorithm. We evaluate DPLA* on a set of experiments showing that it scales better than DeepProbLog and other neural-symbolic frameworks and that it can be applied to a larger set of tasks.

We conclude the thesis by discussing possible future directions for DeepProbLog and open challenges for the field of neural-symbolic AI in general.
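As a concrete illustration of the neural predicate described above, consider the MNIST-addition task used in the thesis: a network classifies each image into a digit, and a logic rule defines the sum of two images. The following is a minimal Python sketch of the idea, not the DeepProbLog API; the network digit_net and the image tensors img1 and img2 are hypothetical placeholders.

    # Illustrative sketch (not the DeepProbLog API): a neural predicate turns
    # a network's softmax output into a distribution over probabilistic facts.
    import torch
    import torch.nn.functional as F

    def digit_distribution(net, image):
        """Neural predicate digit(Image, D): the softmax over the 10 classes
        gives the probabilities of the facts digit(image, 0..9)."""
        logits = net(image.unsqueeze(0)).squeeze(0)
        return F.softmax(logits, dim=0)  # shape (10,), sums to 1

    def prob_addition(net, img1, img2, z):
        """P(addition(img1, img2, z)) under the rule
        addition(X, Y, Z) :- digit(X, D1), digit(Y, D2), Z is D1 + D2.
        The outcomes of each neural predicate are mutually exclusive, so the
        query probability is a sum of products over all consistent proofs."""
        p1 = digit_distribution(net, img1)
        p2 = digit_distribution(net, img2)
        return sum(p1[d1] * p2[d2]
                   for d1 in range(10) for d2 in range(10)
                   if d1 + d2 == z)

Because the query probability is a differentiable function of the softmax outputs, the logic layer can backpropagate a loss on the query (e.g. on addition(img1, img2, 7)) into the network's weights, which is what allows DeepProbLog to train neural networks from logical supervision alone.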
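The A*-based proof search behind DPLA* can likewise be pictured with a small best-first search sketch. This is an assumed reconstruction under simplifying assumptions, not code from the thesis: expand, heuristic, and the proof-state representation are hypothetical placeholders, and treating "no successors" as a completed proof is a simplification.

    # Sketch of the idea behind DPLA*'s approximate inference: instead of
    # enumerating all proofs, expand partial proofs best-first, ordered by
    # cost-so-far plus a heuristic, and keep only the first k complete proofs.
    import heapq

    def best_proofs(start, expand, heuristic, k=3):
        """Best-first (A*-style) search over partial proofs.
        expand(state) yields (cost, next_state) pairs; heuristic(state)
        should underestimate the remaining cost; a state with no expansions
        is taken to be a complete proof. Costs are negative log-probabilities,
        so the cheapest proofs are the most probable ones."""
        frontier = [(heuristic(start), 0.0, 0, start)]  # (f, g, tiebreak, state)
        proofs, counter = [], 1
        while frontier and len(proofs) < k:
            _, g, _, state = heapq.heappop(frontier)
            successors = list(expand(state))
            if not successors:  # complete proof reached
                proofs.append((g, state))
                continue
            for cost, nxt in successors:
                heapq.heappush(frontier, (g + cost + heuristic(nxt),
                                          g + cost, counter, nxt))
                counter += 1
        return proofs

Summing the probabilities of only the k best proofs gives a lower bound on the true query probability, which is the sense in which the inference is approximate but scalable.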
Publication year: 2021
Accessibility: Open