Title Promoter Affiliations Abstract "Non-deterministic semantics as a tool for modeling negation and modality." "Joke Meheus" "Department of Philosophy and moral sciences" "My research proposal provides a new perspective on the question of whether negation is a modality. According to one of the most popular approaches, due to Berto, negation is a particular kind of modality. He developed a formal framework based on possible-worlds semantics. The first objective of this project is to use non-deterministic semantics instead and to develop an alternative framework that incorporates both negation and modality. To do so, I will further extend and adapt the existing results for non-deterministic modal logics. In particular, a non-deterministic approach needs to be constructed that handles both weak modal logics (which lack the standard possible-worlds semantics) and various types of philosophically relevant negations (Boolean, paraconsistent). The project's ultimate goal is to compare the resulting framework with the one proposed by Berto. Such a comparison will be twofold: philosophical and technical. From the technical perspective, I will look not only at the limits of both frameworks, but also at their fine-grainedness, i.e., which logics can be captured in which framework and why. From the philosophical perspective, I will examine in what sense one could consider the framework as modal and how it impacts the interplay between negation and modality." "Structure from Semantics" "Tinne Tuytelaars" "Processing Speech and Images (PSI)" "The SfS project (Structure from Semantics) aims at developing a novel approach to 3D reconstruction, starting from a single RGB image as input. 
This new approach is completely semantics-driven: by analysing the image content, recognizing and localizing the objects therein, an estimate of the 3D structure of the whole scene will be obtained, building on a large dataset of 3D object category models that can be fitted to the objects depicted in the image. Whereas semantic 3D methods in the literature start from a 3D reconstruction and then add semantics, we start from the semantics, and derive the 3D from there. If successful, such semantics-driven 3D offers several advantages over traditional 3D reconstruction: the output is not just a 3D point cloud or 3D mesh, but a detailed model consisting of 3D objects, each with characteristic shape and material properties, placed in a 3D scene. This goes a step further than current state-of-the-art image understanding based on 2D bounding boxes. Moreover, since full 3D object models are fitted, novel view synthesis without occlusion-caused gaps in the model becomes possible." "Meaning change in token space: a token-based computational approach to diachronic prototype semantics" "Rik Vosters" "Centre for Linguistics, Brussels Institute for Applied Linguistics, Brussels Platform for Digital Humanities, Brussels Centre for Urban Studies, Linguistics and Literary Studies, Brussels Centre for Language Studies" "In recent years, the computational modeling of semantic change has witnessed enormous growth, as testified by the introduction of new techniques and the increasing availability of historical data. However, these studies lack a firm grounding in historical semantic scholarship. Conversely, historical lexical-semantic theorizing has not yet embraced the opportunities of data-driven approaches to substantiate its claims. The aim of this project is to provide a tighter connection between those two communities. 
Taking our starting point in the cognitive-semantic literature on semantic change, we will put the theoretical insights of diachronic prototype semantics to the empirical test by means of token-based vector space models, a computational technique devised to model the meaning of individual corpus occurrences of words. Specifically, this project will operationalize and test the four central tenets of a prototype-theoretical conception of meaning change. This endeavor will be the first thorough, large-scale and semi-automatic empirical assessment of the descriptive power of said theory. The testing of these hypotheses will be carried out in four dedicated case studies on a historical corpus of Dutch newspapers of the 19th and 20th centuries." "Connecting morphosyntax and lexical semantics with Elastic Net regression" "Freek Van de Velde" "Quantitative Lexicology and Variational Linguistics (QLVL), Leuven" "This project proposes to use regularization methods from machine learning, more specifically Elastic Net regression (and its siblings Ridge and Lasso), to look into lexical semantic effects in morphosyntactic alternances. These regularization techniques apply shrinkage to the coefficients and can thus be used for variable selection, especially when the number of predictors is very large. In variationist studies, this is often the case if one wishes to enter lexemes associated with a construction into a regression model to predict constructional variants. We combine the Elastic Net regularizer with k-fold cross-validation - a standard procedure - to avoid overfitting. Our approach mitigates the various drawbacks present in alternative approaches that are currently used in variationist linguistics, like random factors in mixed models and collostructional analysis. We look at ten multifactorially driven alternances from Dutch. The project offers a transparent pipeline that can easily be extrapolated to other case studies, and to other languages." 
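The Elastic Net abstract above combines an L1/L2-regularized regression with k-fold cross-validation to select, among many lexeme predictors, those that actually drive a constructional choice. A minimal sketch of that setup, using scikit-learn on synthetic stand-in data (the lexeme columns, effect sizes, and fold count are invented for illustration, not taken from the project):

```python
# Sketch: Elastic Net logistic regression with k-fold cross-validation
# for variable selection among many lexeme predictors.
# All data below are synthetic stand-ins, not the project's corpus.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
n_tokens, n_lexemes = 500, 40
X = rng.normal(size=(n_tokens, n_lexemes))    # one column per (hypothetical) lexeme
true_coefs = np.zeros(n_lexemes)
true_coefs[:5] = [2.0, -1.5, 1.0, -1.0, 0.5]  # only 5 lexemes actually matter
logits = X @ true_coefs
y = (rng.random(n_tokens) < 1 / (1 + np.exp(-logits))).astype(int)  # variant A vs B

# Elastic Net = mixed L1/L2 penalty; an l1_ratio between 0 and 1 interpolates
# between Ridge (0) and Lasso (1). 5-fold CV picks the penalty strength.
model = LogisticRegressionCV(
    penalty="elasticnet", solver="saga", l1_ratios=[0.5],
    Cs=10, cv=5, max_iter=5000,
).fit(X, y)

selected = np.flatnonzero(model.coef_[0] != 0)
print("lexemes with non-zero coefficients:", selected)
```

The L1 component shrinks uninformative coefficients exactly to zero, which is what makes the method usable for variable selection; the cross-validation guards against the overfitting the abstract mentions.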
"Meaning change in token space: a token-based computational approach to diachronic prototype semantics" "Dirk Speelman" "Quantitative Lexicology and Variational Linguistics (QLVL), Leuven" "In recent years, the computational modeling of semantic change has witnessed enormous growth, as testified by the introduction of new techniques and the increasing availability of historical data. However, these studies lack a firm grounding in historical semantic scholarship. Conversely, historical lexical-semantic theorizing has not yet embraced the opportunities of data-driven approaches to substantiate its claims. The aim of this project is to provide a tighter connection between those two communities. Taking our starting point in the cognitive-semantic literature on semantic change, we will put the theoretical insights of diachronic prototype semantics to the empirical test by means of token-based vector space models, a computational technique devised to model the meaning of individual corpus occurrences of words. Specifically, this project will operationalize and test the four central tenets of a prototype-theoretical conception of meaning change. This endeavor will be the first thorough, large-scale and semi-automatic empirical assessment of the descriptive power of said theory. The testing of these hypotheses will be carried out in four dedicated case studies on a historical corpus of Dutch newspapers of the 19th and 20th centuries. 
" "Language and Ideas: Towards a New Computational and Corpus-Based Approach to Ancient Greek Semantics and the History of Ideas" "Toon Van Hal" "Comparative, Historical and Applied Linguistics, Leuven" "Although corpus-based methods are becoming increasingly common in humanities research, the possibilities for the ancient Greek corpus are still underexplored and hence restricted. This research project aims at (1) making decisive progress in the automated semantic annotation of Ancient Greek, by making use of a morphologically and syntactically annotated text corpus consisting of almost 40 million tokens and by applying distributional approaches to Greek diachronic semantics, and (2) exploring its ensuing corpus-based possibilities for researchers comprising not only linguists and classicists, but also historians and philosophers. More specifically, it will focus on corpus-based solutions for ongoing problems in the study of language-related ideas expressed in Ancient Greek. It is the project’s aim to make the ancient Greek text corpus ‘semantically’ readable, both for humans and for computers." "Towards a Natural History of Formal Semantics. On the Development of Supposition Theory in Post-Medieval Logic (c. 1450 - c. 1650)." "Russell Friedman" "De Wulf-Mansion Centre for Ancient, Medieval and Renaissance Philosophy, Centre for Logic and Philosophy of Science" "Supposition theory (ST) is one of the most important non-symbolic forerunners to formal semantics, a branch of linguistics and logic that is characterized by a mathematical (model-theoretic) approach to meaning. ST first emerged after the reintegration of Aristotelian thought in the later 12th c., and it disappeared with the dawn of modern logic in the later 19th c. ST was the predominant current in semantics for a period of more than 500 years, and it is one of the most important components of premodern logic. Research into supposition theory, however, is still in the early stages. 
To date, scholars have focused almost exclusively on the 13th and 14th c., and thus on only a fragment of the available sources. This project aims to shed new light on the history of ST by studying sources from the post-medieval period (c. 1450 - c. 1650), a time frame that is hardly covered in the specialized literature. By means of historical and rational reconstructions of concrete theories, it aims to gain insight into the many forms of ST during the post-medieval period. The core issues of the project include the relation between 'suppositio' and 'acceptio', and the principles of 'descensus', 'ascensus' and 'ampliatio'. " "Enhanced adaptive anomaly detection and root cause analysis by means of semantics and machine learning" "Filip De Turck" "Department of Information technology" "Today's sensor monitoring systems can detect anomalous behaviour and derive its underlying causes by using either expert-driven rules or data-driven machine learning models. Expert-driven approaches require considerable human involvement, as experts must provide information on the fly for the environment in which the system operates. In contrast, data-driven approaches are more adaptive to specific changes, but require large amounts of data to generalise well, are difficult to interpret, and struggle to derive causes. In the end, a trade-off must be made between non-adaptive approaches that require a lot of human effort and less interpretable models that generate floods of alarms. To resolve these problems, the goal of this research is to autonomously incorporate expert knowledge from the application domain into multiple parts of the data-driven learning process: - Expert knowledge will be used inside the anomaly detection tools, to reduce the number of false and missed anomaly predictions. - Cause-analysis methods will derive the underlying reason for the detected unwanted behaviour by using a combination of interpretable detection models and the available expert knowledge. 
- Domain expertise together with the sensor observations will be used to profile the normal behaviour, resulting in a better understanding of the given data. Combining these three parts will result in an interpretable and adaptive sensor-monitoring tool, evaluated within the predictive maintenance and healthcare domains." "The role of semantics in modeling the bilingual mental lexicon." "Centre for Computational Linguistics, Psycholinguistics and Sociolinguistics (CLiPS)" "Bilinguals, people who simultaneously know and use two or more languages, are an interesting source of clues for discovering the internal make-up of our language system. Specifically, it is interesting how bilinguals are able to reliably access the right words in the right language without making mistakes, even though languages contain significant amounts of overlap in terms of semantics, orthography and phonology. In computational psycholinguistics, we model phenomena such as word retrieval via computer models. Although we do not have access to the actual word store embedded in our mind, modeling can provide us with clues as to how it is organized, more particularly by constructing models that can simulate key findings in psycholinguistic experiments. Current models for bilingual word reading can account for most of the facts, but they largely underspecify a crucial component of our day-to-day word retrieval: meaning. Moreover, and related to this shortcoming, most models of word access have only modeled words in isolation. In reality, however, words are always embedded in sentences and larger linguistic and non-linguistic contexts, which also influence the way we access our words. By creating models of sentence processing, we can make sure that meaning has a more central role in our models, and thereby give new explanations for several phenomena in bilingual word processing." "Legal history meets lexical semantics. 
Feudal legal terminology in Flanders and England of the 13th and 14th centuries." "Dirk Heirbaut" "Department of Interdisciplinary Study of Law, Private Law and Business Law" "This project will study the intricacies of medieval multilingualism in law by researching the use of feudal legal terminology in England and the (Dutch-speaking) part of the county of Flanders during the 13th and 14th centuries, in parallel to examining the development of the corresponding legal concepts in relation to land tenure. The linguistic landscape in both cases was inhabited by Latin, French and a Germanic language (Middle Dutch and Middle English) - though not in equal shares, but in continuous and subtle intermingling, in particular of the French and Germanic languages and cultures. The specific issue to be studied is the rule of male primogeniture, which was, both in England and Flanders, a key concept that determined feudal law and around which all other parts of feudal law evolved. We expect to reveal a concordance between the development of feudal legal concepts and the evolution of the language used to describe them. This concordance will offer us a sharper understanding of how the feudal concepts and laws evolved throughout the period under consideration. The most appropriate methodology is to examine the legal language in its textual context. The use of corpus linguistics methods and concordance software allows for detailed searches of words and phrases in multiple contexts across a large body of electronically held texts, providing information on the data that is both quantitative and qualitative, empirical rather than intuitive."
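The legal-history abstract above relies on concordance software to search words and phrases in their textual contexts. A minimal keyword-in-context (KWIC) sketch of that kind of search; the sample sentence on feudal inheritance is invented for illustration, not drawn from the project's corpus:

```python
# Minimal keyword-in-context (KWIC) concordancer, the basic operation
# behind the concordance software mentioned in the abstract above.
# The sample text is invented for illustration.
import re

def kwic(text, keyword, width=3):
    """Return each occurrence of `keyword` with `width` words of context on each side."""
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            hits.append((left, tok, right))
    return hits

sample = ("The eldest son shall inherit the fief whole, for the fief "
          "may not be divided among the younger sons.")
for left, kw, right in kwic(sample, "fief"):
    print(f"{left:>30} | {kw} | {right}")
```

Aligning the keyword in a center column is what makes recurring collocational patterns (here, e.g., the phrasing around inheritance terms) visible at a glance across many attestations.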