Title Participants Abstract
"Lifted Probabilistic Inference by Variable Elimination (Gelifte probabilistische inferentie door variabele eliminatie)" "Nima Taghipour" "Representing, learning, and reasoning about knowledge are central to artificial intelligence (AI). A long standing goal of AI is unifying logic and probability, to benefit from the strengths of both formalisms. Probability theory allows us to represent and reason in uncertain domains, while first-order logic allows us to represent and reason about structured, relational domains. Many real-world problems exhibit both uncertainty and structure, and thus can be more naturally represented with a combination of probabilistic and logical knowledge. This observation has led to the development of probabilistic logical models (PLMs), which combine probabilistic models with elements of first-order logic, to succinctly capture uncertainty in structured, relational domains, e.g., social networks, citation graphs, etc. While PLMs provide expressive representation formalisms, efficient inference is still a major challenge in these models, as they typically involve a large number of objects and interactions among them. Among the various efforts to address this problem, a promising line of work is lifted probabilistic inference. Lifting attempts to improve the efficiency of inference by exploiting the symmetries in the model. The basic principle of lifting is to perform an inference operation once for a whole group of interchangeable objects, instead of once per individual in the group. Researchers have proposed lifted versions of various (propositional) probabilistic inference algorithms, and shown large speedups achieved by the lifted algorithms over their propositional counterparts. In this dissertation, we make a number of novel contributions to lifted inference, mainly focusing on lifted variable elimination (LVE). First, we focus on constraint processing, which is an integral part of lifted inference. 
Lifted inference algorithms are commonly tightly coupled to a specific constraint language. We bring more insight into LVE by decoupling its operators from the constraint language used. We define lifted inference operations so that they operate on the semantic level rather than on the syntactic level, making them language-independent. Further, we show how this flexibility allows us to improve the efficiency of inference, by enhancing LVE with a more powerful constraint representation. Second, we generalize the `lifting' tools used by LVE, by introducing a number of novel lifted operators in this algorithm. We show how these operations allow LVE to exploit a broader range of symmetries, and thus expand the range of problems it can solve in a lifted way. Third, we advance our theoretical understanding of lifted inference by providing the first completeness result for LVE. We prove that LVE is complete---that is, it always has a lifted solution---for the fragment of 2-logvar models, a model class that can represent many useful relations in PLMs, such as (anti-)symmetry and homophily. This result also shows the importance of our contributions to LVE, as we prove they are necessary and sufficient for LVE to achieve completeness. Fourth, we propose the structure of first-order decomposition trees (FO-dtrees) as a tool for symbolically analyzing lifted inference solutions. We show how FO-dtrees can be used to characterize an LVE solution in terms of a sequence of lifted operations. We further provide a theoretical analysis of the complexity of lifted inference based on the corresponding FO-dtree, which is valuable for finding and selecting among different lifted solutions. Finally, we present a pre-processing method for speeding up (lifted) inference. Our goal with this method is to speed up inference in PLMs by restricting the computations to the requisite part of the model.
For this, we build on the Bayes-ball algorithm that identifies the requisite variables in a ground Bayesian network. We present a lifted version of Bayes-ball, which works with first-order Bayesian networks, and show how it applies to lifted inference."
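The lifting principle described in the abstract above (one inference operation per group of interchangeable objects, rather than one per individual) can be illustrated with a toy sketch. This is an illustration of the general idea only, not the thesis's LVE operators; `phi` is a hypothetical unary potential over one binary variable.

```python
from itertools import product
from math import prod

def partition_grounded(phi, n):
    """Partition function over n interchangeable binary variables,
    computed the propositional way: enumerate all 2**n joint states."""
    return sum(prod(phi[x] for x in state) for state in product((0, 1), repeat=n))

def partition_lifted(phi, n):
    """The lifted shortcut: compute the per-object sum once and raise it
    to the group size, since all n objects are interchangeable."""
    return (phi[0] + phi[1]) ** n

phi = {0: 1.0, 1: 2.0}  # hypothetical potential values for states 0 and 1
assert partition_grounded(phi, 10) == partition_lifted(phi, 10)
```

The grounded version does O(2^n) work while the lifted one is a single exponentiation; actual lifted operators apply the same counting idea inside variable elimination.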
"Lifted Inference and Learning in Statistical Relational Models (Eerste-orde inferentie en leren in statistische relationele modellen)" "Guy Van den Broeck" "Statistical relational models combine aspects of first-order logic and probabilistic graphical models, enabling them to model complex logical and probabilistic interactions between large numbers of objects. This level of expressivity comes at the cost of increased complexity of inference, motivating a new line of research in lifted probabilistic inference. By exploiting symmetries of the relational structure in the model, and reasoning about groups of objects as a whole, lifted algorithms dramatically improve the run time of inference and learning.The thesis has five main contributions. First, we propose a new method for logical inference, calledfirst-order knowledge compilation. We show that by compiling relational models into a new circuit language, hard inference problems become tractable to solve. Furthermore, we present an algorithm that compiles relational models into our circuit language. Second, we show how to use first-order knowledge compilation for statistical relational models, leading to a new state-of-the-art lifted probabilistic inference algorithm. Third, we develop a formal framework for exact lifted inference, including a definition in terms of its complexity w.r.t. the number of objects in the world. From this follows a first completeness result, showing that the two-variable class of statistical relational models always supports lifted inference. Fourth, we present an algorithm for approximate lifted inference by performing exact lifted inference in a relaxed, approximate model. Statistical relational models are receiving a lot of attention today because of their expressive power for learning. Fifth, we propose to harness the full power of relational representations for that task, by using lifted parameter learning. 
The techniques presented in this thesis are evaluated empirically on statistical relational models of thousands of interacting objects and millions of random variables."
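Knowledge compilation, as mentioned in the abstract above, targets weighted model counting (WMC), which reduces probabilistic inference to summing the weights of satisfying assignments. A minimal propositional sketch of the quantity being computed follows; the clause set and weights are made up for illustration, and compiled circuits exist precisely to avoid this exponential enumeration.

```python
from itertools import product

def wmc(clauses, weights, variables):
    """Weighted model count by naive enumeration: sum, over all worlds
    satisfying every clause, the product of per-variable literal weights."""
    total = 0.0
    for assignment in product((False, True), repeat=len(variables)):
        world = dict(zip(variables, assignment))
        if all(any(world[v] == sign for v, sign in clause) for clause in clauses):
            w = 1.0
            for v, val in world.items():
                w *= weights[v][val]
            total += w
    return total

# Hypothetical 2-variable model with one clause (a OR NOT b):
variables = ["a", "b"]
clauses = [[("a", True), ("b", False)]]
weights = {"a": {True: 0.3, False: 0.7}, "b": {True: 0.6, False: 0.4}}
print(wmc(clauses, weights, variables))  # approx. 0.58 = 1 - 0.7 * 0.6
```

Lifted/compiled approaches get the same number without touching each of the exponentially many worlds.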
"Making inference across mobilisation and influence research" "Joost Berkhout, Jan Beyers, Caelesta Braun, Marcel Hanegraaff, David Lowery" "Scholars of mobilisation and policy influence employ two quite different approaches to mapping interest group systems. Those interested in research questions on mobilisation typically rely on a bottom-up mapping strategy in order to characterise the total size and composition of interest group communities. Researchers with an interest in policy influence usually rely on a top-down strategy in which the mapping of politically active organisations depends on samples of specific policies. But some scholars also use top-down data gathered for other research questions on mobilisation (and vice versa). However, it is currently unclear how valid such large-N data for different types of research questions are. We illustrate our argument by addressing these questions using unique data sets drawn from the INTEREURO project on lobbying in the European Union and the European Unions Transparency Register. Our findings suggest that top-down and bottom-up mapping strategies lead to profoundly different maps of interest group communities."
"The influence of study-level inference models and study set size on coordinate-based fMRI meta-analyses" "Han Bossier, Ruth Seurinck, Simone Kuehn, Tobias Banaschewski, Gareth J Barker, Arun LW Bokde, Jean-Luc Martinot, Herve Lemaitre, Tomás Paus, Sabina Millenet, Beatrijs Moerkerke" "Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima with possibly the associated effect sizes to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE) that only uses peak locations, fixed effects, and random effects meta-analysis that take into account both peak location and height] and the amount of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combine these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. 
However, it requires more studies than the other procedures to achieve comparable activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results."
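The random-effects pooling that performs best in the abstract above can be sketched with the standard DerSimonian-Laird estimator. This is a generic textbook version with made-up effect sizes and variances, not the authors' analysis pipeline.

```python
def dersimonian_laird(effects, variances):
    """Pooled effect under a random-effects model: estimate the
    between-study variance tau^2 from Cochran's Q, then reweight
    each study by 1 / (within-study variance + tau^2)."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)

# Hypothetical peak-height effects from three studies with their variances:
pooled = dersimonian_laird([0.5, 0.3, 0.8], [0.04, 0.09, 0.05])
```

When the studies disagree more than their within-study variances explain, tau^2 grows and the weights flatten toward equality, which is why random-effects pooling is more conservative than a fixed-effects analysis.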
"What I infer depends on who you are: The influence of stereotypes on trait and situational spontaneous inferences" "Kaat Van Acker"
"The influence of social categorization on trait inferences: ERP-data" "Laurens Van Der Cruyssen" "Previous research has shown that the mere categorization of individuals into minimal groups will evoke in-group favoritism. The current study combines the minimal group paradigm with an ERP-experiment of spontaneous trait inferences using an expectancy-violation paradigm. 53 subjects first performed a fake reaction-time (RT) task to allow social categorization (in-group/out-group). One group of subjects (SoCat) all received the same group label based on their results on the RT task, another group of subjects (NoCat) did not receive a group label after the RT task and continued directly to the ERP-recording. During the ERP-recording all subjects read behavioral sentences implying a personality trait of a protagonist. The protagonist either belonged to the in- or out-group, depending on the experimental condition (SoCat; NoCat) to which the subject belonged. The last sentence describing each protagonist was consistent, inconsistent or irrelevant to the previously implied trait. By comparing consistent versus inconsistent sentences, we can measure temporal as well as spatial localization of trait-inferences. We expect trait inferences about an in-group member to evoke a P3-component. Trait inferences about out-group members are expected to generate a P3-component as well, though with a later onset. A small difference in source localization is also expected, involving the medial prefrontal cortex at a more ventral location for in-group members than out-group members."
"The influence of lack of reference conditions on dosimetry in pre-clinical radiotherapy with medium energy x-ray beams" "Christopher Cawthorne" "Despite well-established dosimetry in clinical radiotherapy, dose measurements in pre-clinical and radiobiology studies are frequently inadequate, thus undermining the reliability and reproducibility of published findings. The lack of suitable dosimetry protocols, coupled with the increasing complexity of pre-clinical irradiation platforms, undermines confidence in preclinical studies and represents a serious obstacle in the translation to clinical practice. To accurately measure output of a pre-clinical radiotherapy unit, appropriate Codes of Practice (CoP) for medium energy x-rays needs to be employed. However, determination of absorbed dose to water (Dw) relies on application of backscatter factor (Bw) employing in-air method or carrying out in-phantom measurement at the reference depth of 2 cm in a full backscatter (i.e. 30 × 30 × 30 cm3) condition. Both of these methods require thickness of at least 30 cm of underlying material, which are never fulfilled in typical pre-clinical irradiations. This work is focused on evaluation the effects of the lack of recommended reference conditions in dosimetry measurements for pre-clinical settings and is aimed at extending the recommendations of the current CoP to practical experimental conditions and highlighting the potential impact of the lack of correct backscatter considerations on radiobiological studies."
"On the influence of reference Mahalanobis distance space for quality classification of complex metal parts using vibrations" "Liangliang Cheng, Vahid Yaghoubi Nasrabadi, Wim Van Paepegem, Mathias Kersemans" "Mahalanobis distance (MD) is a well-known metric in multivariate analysis to separate groups or populations. In the context of the Mahalanobis-Taguchi system (MTS), a set of normal observations are used to obtain their MD values and construct a reference Mahalanobis distance space, for which a suitable classification threshold can then be introduced to classify new observations as normal/abnormal. Aiming at enhancing the performance of feature screening and threshold determination in MTS, the authors have recently proposed an integrated Mahalanobis classification system (IMCS) algorithm with robust classification performance. However, the reference MD space considered in either MTS or IMCS is only based on normal samples. In this paper, an investigation on the influence of the reference MD space based on a set of (i) normal samples, (ii) abnormal samples, and (iii) both normal and abnormal samples for classification is performed. The potential of using an alternative MD space is evaluated for sorting complex metallic parts, i.e., good/bad structural quality, based on their broadband vibrational spectra. Results are discussed for a sparse and imbalanced experimental case study of complex-shaped metallic turbine blades with various damage types; a rich and balanced numerical case study of dogbone-cylinders is also considered."
"Class composition as a frame of reference for teachers? The influence of class context on teacher recommendations" "Simon Boone, Sarah Thys, Mieke Van Houtte, Piet Van Avermaet" "Teacher recommendations are an important factor in the process of track placement, but research has shown that they are biased by pupils’ social background. Pupils from higher socio-economic backgrounds are more likely to get the advice to enrol in an academic track than pupils from lower socio-economic backgrounds, irrespective of prior achievement. Previous studies looked primarily at individual pupil or parent characteristics and their influence on teacher recommendations. However, in this article, the authors argue that the class context forms the frame of reference within which a teacher forms his/her recommendation for pupils. Therefore, this article investigates class composition effects on teacher recommendations at the transition between primary and secondary education in Flanders. More specifically, we look at the socio-economic, ethnic and ability composition of a class. Multilevel logistic models were tested on data collected in 36 primary schools in the cities of Ghent and Antwerp in May 2015. The results show that only the ability composition of the classroom exerts a frame-of-reference effect on teacher recommendations for academically versus practically oriented tracks. A pupil with a low individual ability in a low-ability class was more likely to get the advice to enrol in an academically oriented track than an equally able pupil in a high-ability class. This study demonstrated the limited importance of class composition in teacher recommendations, but calls for more research on teacher bias in the process of track assignment."
"Influence of the pinch-point-temperature difference on the performance of the Preheat-parallel configuration for a low-temperature geothermally-fed CHP" "Sarah Van Erdeweghe, William D'haeseleer" "In this work, we investigate the performance of the so-called Preheat-parallel CHP configuration, for the connection to a thermal network (TN). A low-temperature geothermal source (130°C), and the connection to a 75°C/50°C and a 75°C/35°C thermal network are considered. For a pure parallel CHP configuration, the brine delivers heat to the ORC and the thermal network in parallel. However, after having delivered heat to the ORC, the brine in the ORC branch still contains some energy which is not used. The Preheat-parallel configuration utilizes this heat to preheat the TN water before it enters the parallel branch, where the TN water is heated to the required supply temperature. The Preheat-parallel configuration is especially favorable when connected to a thermal network with a low return temperature, a large temperature difference between supply and return temperatures—thereby exploiting the preheating-effect—and for high heat demands. In this paper, we focus on the effect of the pinch-point-temperature difference (∆Tpinch) on the plant performance. ∆Tpinch is directly related with the size and cost of the heat exchangers and strongly influences the preheating-effect, which is the most characteristic feature of the Preheat-parallel configuration. First, we present the results of a detailed sensitivity analysis of ∆Tpinch. A higher ∆Tpinch results in a lower preheating-effect, a lower net power output and, correspondingly, lower plant efficiency. Furthermore, we compare the performance of the Preheat-parallel configuration with the convenient parallel and series CHP configurations. For all three configurations, the performance decreases with an increase of ∆Tpinch. 
For the considered thermal network requirements, the net power generation is the highest for the Preheat-parallel configuration. With respect to the parallel configuration, the gain in net power generation stays approximately constant (75°C/35°C TN) or decreases (75°C/50°C TN) with the imposed pinch-point-temperature difference. With respect to the series configuration, the gain in net power generation increases for a higher value of ∆Tpinch. This means that the impact of ∆Tpinch is the largest for the series configuration, followed by the Preheat-parallel configuration, and that the impact on the performance of the parallel configuration is the smallest."
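The qualitative effect of ∆Tpinch described in the abstract above can be sketched with a toy counter-flow energy balance: the brine cannot be cooled closer to the TN return temperature than the pinch-point-temperature difference, so a larger ∆Tpinch leaves more heat in the brine. The heat-capacity rate `mcp` is a hypothetical placeholder; this is not the paper's plant model.

```python
def recoverable_heat(t_brine_in, t_return, dt_pinch, mcp=1.0):
    """Heat (in units of mcp * K) recoverable from the brine when its
    outlet temperature is pinch-limited at the TN return side."""
    t_brine_out = t_return + dt_pinch  # brine can approach the return water no closer than dt_pinch
    return mcp * (t_brine_in - t_brine_out)

# 130 degC brine and the 35 degC return of the 75/35 network from the abstract:
for dt in (2.0, 5.0, 10.0):
    print(dt, recoverable_heat(130.0, 35.0, dt))  # recoverable heat shrinks as dt_pinch grows
```

This monotone trade-off is exactly why ∆Tpinch, which also sets the heat-exchanger size and cost, is the natural sensitivity parameter for all three configurations.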