Title Promoter Affiliations Abstract "Multimodal Signal Analysis for Unobtrusive Characterization of Obstructive Sleep Apnea" "Sabine Van Huffel" "Laboratory of Respiratory Diseases and Thoracic Surgery (BREATHE), ESAT - STADIUS, Stadius Centre for Dynamical Systems, Signal Processing and Data Analytics, Dynamical Systems, Signal Processing and Data Analytics (STADIUS)" "Obstructive sleep apnea (OSA) is the most prevalent sleep-related breathing disorder, yet subjects suffering from it often remain undiagnosed due to the cumbersome diagnosis procedure. Moreover, the prevalence of OSA is increasing and better phenotyping of patients is needed in order to prioritize treatment. The goal of this thesis was to tackle these challenges in OSA diagnosis by means of advanced signal processing algorithms. Additionally, two main, generally applicable algorithmic contributions were proposed. The binary interval coded scoring algorithm was extended to multilevel problems and novel monotonicity constraints were introduced. Moreover, improvements to the random-forest-based feature selection were proposed, including the use of Cohen's kappa, patient-independent validation, and further feature pruning steered by the correlation between features. The first part of this thesis focused on the development of reliable, multimodal OSA screening methods based on unobtrusive measurements such as oxygen saturation (SpO2), electrocardiography (ECG), pulse photoplethysmography (PPG), and respiratory measures. The novel SpO2 model was the best-performing OSA screening method, obtaining accuracies of over 88% and outperforming most state-of-the-art algorithms. Different multimodal OSA detection approaches were explored, but this performance could not be further improved. Finally, a main contribution of this PhD was to test the developed ECG and PPG OSA detection algorithms on unobtrusive signals, including capacitively-coupled ECG and bioimpedance, and wearable PPG recordings. Although these experiments showed promising results, the limitations of the current algorithms on the unobtrusive data were also highlighted. In the second part of this PhD, a contribution towards a better characterization of OSA patients beyond the apnea-hypopnea index (AHI) was proposed. Novel pulse oximetry markers were developed and investigated to assess the cardiovascular status of OSA subjects. It was found that patients with cardiovascular comorbidities experienced more severe oxygen desaturations and incomplete resaturations to the baseline SpO2 values. The novel multilevel interval coded scoring was used to train a model to predict the cardiovascular status of OSA patients based on age, BMI, and the SpO2 parameters. The final model obtained good classification performance on a clinical population, but the predictive power of this model should be further validated." "Kernel Based Methods for Microarray and Mass Spectrometry Data Analysis" "Bart De Moor" "ESAT - STADIUS, Stadius Centre for Dynamical Systems, Signal Processing and Data Analytics" "Kernel learning methods are advanced and powerful techniques that allow the construction of non-linear models for classification and regression problems. Microarray and mass spectrometry data sources measure the activity and/or expression of thousands of genes and proteins, respectively, on a given set of biological samples. Analysis of the information contained in such samples has become a crucial activity in cancer research during the last decade.
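As a purely illustrative aside (not code from this project), the following minimal Python sketch shows the kind of kernel-based non-linear classifier described above, applied to synthetic data standing in for a microarray matrix; the sample size, feature count, and hyperparameters are hypothetical choices for illustration.

# Illustrative sketch only: an RBF-kernel SVM on synthetic data standing in for
# a microarray matrix (samples x genes); not the methods developed in the project.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))    # 60 samples, 2000 "gene expression" features
y = rng.integers(0, 2, size=60)    # hypothetical binary labels (e.g. tumour vs. normal)

# Non-linear model via the RBF kernel; C and gamma would normally be tuned by model selection.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

On real data of this kind, careful model selection and variable selection would be essential, as the abstract itself emphasizes.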
However, common problems encountered on these biological data sources are related to the large number of variables compared to the number of examples, the low signal-to-noise ratio, irrelevant variables, and the presence of missing values and outliers. Additionally, current methodologies are not yet fully established and results are not always reproducible. The goal of the proposed research is mainly the application of existing kernel-based methods and their subsequent adaptation to the areas of microarray and mass spectrometry data analysis. Topics include, among others, preprocessing, prediction/classification models, variable selection (gene selection or biomarker identification), and novelty detection. Model selection will play a central role in the construction of reliable and reproducible algorithms." "Advanced Solutions for Neonatal Sleep Analysis and the Effects of Maturation" "Sabine Van Huffel" "Woman and Child, ESAT - STADIUS, Stadius Centre for Dynamical Systems, Signal Processing and Data Analytics" "Worldwide, approximately 11% of babies are born before 37 weeks of gestation. The survival rates of these prematurely born infants have steadily increased during the last decades as a result of the technical and medical progress in neonatal intensive care units (NICUs). The focus of the NICUs has therefore gradually evolved from increasing life chances to improving quality of life. In this respect, promoting and supporting optimal brain development is crucial. Because these neonates are born during a period of rapid growth and development of the brain, they are susceptible to brain damage and therefore vulnerable to adverse neurodevelopmental outcomes. In order to identify patients at risk of long-term disabilities, close monitoring of the neurological function during the first critical weeks is a primary concern in current NICUs. Electroencephalography (EEG) is a valuable tool for continuous noninvasive brain monitoring at the bedside. The brain waves and patterns in the neonatal EEG provide interesting information about newborn brain function. However, visual interpretation is a time-consuming and tedious task requiring expert knowledge. This indicates a need for automated analysis of the neonatal EEG characteristics. The work presented in this thesis aims to contribute to this. The first part of this thesis focuses on the development of algorithms to automatically classify sleep stages in preterm babies. In total, three different strategies are proposed. In the first method, the problem is approached in the traditional way and a new set of EEG complexity features is combined with a classification algorithm. This analysis demonstrates that the complexity of the EEG signal is fundamentally different depending on the vigilance state of the infant. Building on this finding, a novel tensor-based approach that detects quiet sleep in an unsupervised manner is presented. Finally, a deep convolutional neural network to classify neonatal sleep stages is implemented. This end-to-end model optimizes the feature extraction and the classification model simultaneously, avoiding the challenging task of feature engineering. The second part concentrates on the quantification of functional brain maturation in preterm infants. We establish that the complexity of the EEG time series is significantly positively correlated with the postmenstrual age of the neonate. Moreover, these promising biomarkers of brain maturity are used to develop a brain-age model.
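As a hedged illustration of what such a brain-age model could look like in code (not the pipeline of the thesis), the sketch below regresses postmenstrual age on two hypothetical EEG complexity features using synthetic data; the feature definitions, sample size, and choice of regressor are assumptions made purely for illustration.

# Illustrative sketch only: a simple "brain-age" regression on synthetic data,
# mapping hypothetical EEG complexity features to postmenstrual age (weeks).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 80
pma = rng.uniform(27, 42, size=n)                # postmenstrual age in weeks
features = np.column_stack([
    0.05 * pma + rng.normal(0, 0.2, n),          # e.g. a multiscale entropy value
    0.02 * pma + rng.normal(0, 0.1, n),          # e.g. another complexity measure
])

predicted_age = cross_val_predict(Ridge(alpha=1.0), features, pma, cv=5)
print("mean absolute error (weeks):", np.abs(predicted_age - pma).mean())

In such a setup, the deviation between predicted and true age could serve as an index of functional maturation, in the spirit of the brain-age idea described above.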
The resulting brain-age model can accurately estimate the infant's age and thereby assess the functional brain maturation. In addition, the relationship between early functional and structural brain development is investigated based on two complementary neuromonitoring modalities, EEG and MRI. Regression models show that the brain activity during the first postnatal days is related to the size and growth of the cerebellum in the subsequent weeks. Finally, the influence of thyroid function on the developing brain is examined in extremely premature infants. No significant association was observed between the change in free thyroxine concentrations during the first week of life and maturational features extracted from the EEG at term-equivalent age. To shed more light on the precise relationship between thyroid function and brain maturation, prospective studies with a more homogeneous dataset are needed in the future." "MR spectroscopy quantitation results' evaluation" "Zenon Starcuk, Sabine Van Huffel" "ESAT - STADIUS, Stadius Centre for Dynamical Systems, Signal Processing and Data Analytics" "The main topic of this PhD project is to assess the fitting quality for MR spectroscopy data. This aim should be achieved by statistical evaluation." "Generic Machine Learning algorithms for Real-time Human-Computer Interaction" "Johan Suykens" "ESAT - STADIUS, Stadius Centre for Dynamical Systems, Signal Processing and Data Analytics, Electrical Engineering Technology (ESAT), Group T Leuven Campus, Computer Science Technology, Group T Leuven Campus" "The goal is to build a generic classifier for gesture interaction applications. This kind of application uses directed, conscious, intentional interactions that are clearly defined in time and space, for example gestures for selection and manipulation. The classifier needs to be generic so that it can be applied to different types of sensors and different gestures. The sensors which will be discussed are inertial measurement units (a combination of accelerometers, gyroscopes and magnetometers) and touch screens. In a second phase, 3D cameras will also be studied. An important criterion for the classifier is accuracy. This makes sure the user is correctly understood and a smooth interaction is possible. Moreover, the user needs to have the feeling that the application reacts immediately, so the classification needs to run in real time. Machine learning techniques will be used to build this generic model. These techniques are not yet widely used in the human-computer interaction domain. However, machine learning techniques are concerned with the design and development of algorithms which learn to recognize patterns, and so these techniques are suitable for the problem of sensor data recognition. The research will focus on Support Vector Machines, a classification technique which analyses data and recognizes patterns by transforming the data to a feature space and searching for the separating hyperplane with maximal margin. To achieve these goals, the following research topics are considered. First, it needs to be investigated whether the generation and selection of features from the sensor data, the first step in the design of a classifier, can be generalized. Features should no longer be selected for each sensor type by an expert. Second, it is examined how the classifier can deal with transformation-invariant features, i.e. gestures for which size, position and angle do not matter.
Third, it is investigated how to automate the tuning of the parameters of the classifier (for example the regularization constant or the kernel function). Finally, it is investigated whether the sensor can be continuously monitored without knowing the start and end signal, for example by using a sliding window." "Machine Learning for Energy Performance Prediction in Early Design Stage of Buildings" "Philipp Geyer" "ESAT - STADIUS, Stadius Centre for Dynamical Systems, Signal Processing and Data Analytics, Building Physics Section, Architectural Engineering" "Early building design is an iterative process. In this process, architects and engineers evaluate different design concepts to ensure the design brief is fulfilled. The rising need for designs to meet certain performance targets has introduced building performance simulation (BPS) into the design process. Early design decisions have the highest impact on building performance. Therefore, it is important to make good design decisions at this stage of the design process. The computational effort, in terms of model development time and prediction time, is high for BPS. This computational effort, combined with other factors like the need for detailed design information, limits the use of BPS in the early design stages. This research focuses on computational methods that deliver predictions as fast as possible. The speed at which a prediction is obtained is important for iterative early design, as the prediction models should be able to keep up with the thinking speed of a designer. For example, a simple BPS takes about 5 minutes to make a prediction of design performance. Five minutes may appear to be an insignificant amount of time, but this time accumulates as more designs need to be evaluated. Moreover, during the creative process, a designer iterates over design options faster than a BPS model can provide performance results. The slow nature of BPS results in a limited number of design options being evaluated for building performance and in potentially missing designs with ideal performance. Furthermore, computational speed becomes a challenge when designers with different educational backgrounds need to collaborate on a design. The low computational speed of BPS makes designers rely more on rule-of-thumb knowledge. The applied rule-of-thumb knowledge may or may not be valid for the proposed design problem, which increases the risk of not taking the right design decisions from a performance point of view. To overcome the challenge of computational speed, this research evaluates machine learning (ML) as an alternative method for building performance prediction. The reason for using ML is its high computational speed and prediction accuracy. However, ML models have to overcome challenges like generalization, reusability, and interpretability. This research evaluates generalization through two different approaches: a component-based approach and deep learning. Both approaches model the relationship between building design parameters and design performance in hierarchies. Results indicate that ML models do generalize to unseen design cases, provided the evaluated design is similar to the nonlinearity present in the training data distribution. Among the ML algorithms, deep learning architectures based on convolutional neural networks (CNNs) outperform traditional neural networks (NNs).
CNNs are able to outperform traditional NNs because they can extract features from the data in a hierarchical manner. The reusability of the ML models is evaluated to reduce the computational effort required in developing multiple ML models. Transfer learning and multi-task learning methods are evaluated to understand ML model reusability. Developing[1] two deep learning models sequentially takes ~22 minutes. This development time accumulates as the number of models to be developed increases. Results indicate that through both transfer learning and multi-task learning the computational effort for model development can be reduced without compromising model accuracy. The development times are reduced to ~14 minutes and ~8 minutes, respectively. The interpretability of deep learning methods is evaluated through dimensionality reduction methods. Results indicate that deep learning models learn to re-organize the design space based on the design's energy signature. The trained model therefore behaves like a top-down approach to prediction, in which predictions are based on design similarity. Finally, results show that ML models predict design performance for 201 design options in 0.9 seconds, while the same results can be obtained from BPS in ~20 minutes, showing that ML models are significantly faster than BPS. These results indicate that ML models could indeed keep up with the speed of a designer. The thesis elaborates further on the ML methods evaluated and the outcomes of the research. [1] Training the ML model. This is not the same as developing BPS during design." "Precision measurements of the electron energy distribution in nuclear beta decays" "Kazimierz Bodek, Nathal Severijns" "Nuclear and Radiation Physics" "Although nuclear β decays have been studied for almost a century, there are still questions that can be answered through precision measurements of the energy distribution of emitted electrons. The shape of the β spectrum reflects not only the weak interaction responsible for the decay; it is also sensitive to the strong interaction which confines the decaying quark in the nucleon. The electromagnetic force also plays a role, since the ejected electron interacts with the charged nucleus. Therefore, measurements of the β spectrum shape are an important tool in understanding all the interactions combined in the Standard Model (SM), and high precision is required to disentangle the effects of the different interactions. Moreover, these measurements can even shed some light on New Physics (NP), if discrepancies between the shape of a measured spectrum and the one predicted by the SM are observed. One of the open questions is the validity of the V−A form of the weak interaction Lagrangian. Measurements of the β spectrum directly address this issue, since the exotic currents (i.e. other than V and A) affect the spectrum shape through the so-called Fierz interference term b (its standard effect on the spectrum shape is recalled below). An experiment sensitive to NP requires a precision at the level of 10⁻³ to be able to compete with energy-frontier measurements performed at the Large Hadron Collider. When precision is at stake, the success of an experiment is determined by a proper understanding of its systematic uncertainties. The main limitation of the precision of spectrum measurements comes from the properties of the energy detector used in the experiment, in particular from the accuracy of the detector energy response and of the energy deposition model in the detector active volume.
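For context, and as a textbook expression rather than a result of this work, the allowed β-spectrum shape including the Fierz interference term b is commonly written as

\[
\frac{dN}{dE_e} \;\propto\; F(Z, E_e)\, p_e\, E_e\, (E_0 - E_e)^2 \left(1 + b\,\frac{m_e}{E_e}\right),
\]

where F(Z, E_e) is the Fermi function, p_e and E_e are the electron momentum and total energy, E_0 is the spectrum endpoint energy, and m_e the electron mass; a nonzero b would distort the spectrum most strongly at low electron energies.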
The miniBETA spectrometer was designed to measure β spectra with a precision at the level of 10⁻³. It combines a plastic scintillator energy detector with a multi-wire drift chamber in which electrons emitted from a decaying source are traced in three dimensions. Using a plastic scintillator minimizes systematic effects related to the model of the energy deposition in the detector active volume. Due to the low Z of the plastic and an appropriate choice of the detector thickness, the probability that the energy is fully deposited is maximized. Keeping the probability of other processes (i.e. backscattering, bremsstrahlung, and transmission) low reduces the influence of their uncertainties on the predicted spectrum. Particle tracking filters out of the measured spectrum those events that do not originate from electrons emitted from the source. It also allows backscattering events to be recognized, further reducing their influence. The thesis describes the commissioning of the miniBETA spectrometer. Several gas mixtures of helium and isobutane were tested as media for tracking particles. Calibrations of the drift chamber response were based on measurements of cosmic muons. The mean efficiency of detecting an ionizing particle in a single drift cell reached 0.98 for mixtures at 600 mbar and 0.95 at 300 mbar. The spatial resolution of a single cell, determined from the drift time, reached 0.4 mm and 0.8 mm, respectively. The resolution determined from the charge division was roughly an order of magnitude worse. Several corrections dealing with systematic effects were investigated, such as a correction for signal wire misalignment sensitive to shifts as small as 0.02 mm. It was shown that particle tracking improves the precision of spectrum measurements, as the response of the spectrometer depends on the position where the scintillator was hit. A preliminary calibration of the energy response of the spectrometer was performed with a ²⁰⁷Bi source. The calibration was based on fitting the measured spectrum with a spectrum obtained as the convolution of the simulated energy deposited in the scintillator and the spectrometer response. The energy resolution of the prototype setup is ρ(1 MeV) = 7.6%. The uncertainty of the scale parameter, which hinders the extraction of energy-dependent terms (i.e. the Fierz term and weak magnetism) from the spectrum shape, is 5 × 10⁻³. These values will be significantly improved in a new setup, with an optimized scintillator geometry and better homogeneity of the energy response." "Strategic Research Programme: High-Energy Physics at the VUB" "Jorgen D'Hondt" "Astronomy and Astrophysics Research Group, Theoretical Physics, Elementary Particle Physics, Physics" "This programme studies the fundamental physics processes involved in the most energetic phenomena in the Universe, the nature of dark matter, and the structure of the primordial quark-gluon plasma, using the CMS detector at the Large Hadron Collider and, beyond its reach, the most energetic particles coming from the cosmos, as detected by the IceCube observatory." "Syndecan-PDZ scaffolds in the molecular and functional heterogeneity of extracellular vesicles: the role of syntenin in ncRNA transfer" "Pascale Zimmermann" "Department of Human Genetics" "Extracellular vesicles (EVs) are emerging as organelles supporting cell-to-cell communication over short or long distances. They contribute to various physiological and pathological processes, as well as to various systemic diseases, in particular cancer, neurodegeneration and pathogen invasion.
EVs are delimited by a lipid bilayer and can be released by any type of cell. They have the same topology as the cell and contain a mixture of genetic material (DNA and RNA), proteins, and lipids. Secreted EVs dock onto the surfaces of recipient cells, where they can transmit signals from the cell surface and/or transfer their contents into cells to elicit functional responses. The syndecan-syntenin pathway, originally discovered by the host laboratory, can contribute to up to 50% of the exosome population. Notably, the laboratory also showed that syntenin is mandatory for the pro-migratory activity of exosomes [15]. The discovery that EVs can transport RNAs and non-coding (nc)RNAs between cells, and that the transmission of these RNA molecules can modify the phenotype of the recipient cells, has led to the suggestion that RNAs carried by EVs may play a previously unrecognized role in intercellular communication, and it launched the field of extracellular RNA biology. The mechanisms underlying the loading of RNAs into exosomes are still mostly unknown. Yet, a few RNA-binding proteins have been found that are capable of selectively binding RNA molecules with specific motifs and inducing their export into exosomes. Here we aim to evaluate the role of syntenin in the exosomal transfer of ncRNAs. We will use the MCF-7 breast cancer cell line as the EV-donor model and human umbilical vein endothelial cells (HUVECs) as the EV-recipient model. First, we will determine the ncRNA composition of exosomes and lysates from wild-type MCF-7 and syntenin-knockout MCF-7 cells (preparation of small RNA libraries). Next, ncRNAs involved in angiogenesis functions or showing a different distribution will be identified. We will then observe the effects of MCF-7 exosome and ncRNA transfer on recipient cells such as HUVECs. Finally, we will study the interaction between ncRNAs and syntenin." "Solving Systems of Polynomial Equations" "Marc Van Barel" "Numerical Analysis and Applied Mathematics (NUMA), NUMA, Numerical Analysis and Applied Mathematics Section" "Systems of polynomial equations arise naturally from many problems in applied mathematics and engineering. Examples of such problems come from robotics, chemical engineering, computer vision, dynamical systems theory, signal processing and geometric modeling, among others. The numerical solution of systems of polynomial equations is considered a challenging problem in computational mathematics. Important classes of existing methods are algebraic methods, which solve the problem using eigenvalue computations, and homotopy methods, which track solution paths in a continuous deformation of the system. In this text, we propose new algorithms of both these types which address some of the most important (numerical) shortcomings of existing methods. Classical examples of algebraic techniques use Gröbner bases, border bases or resultants. These methods take advantage of the fact that the solutions are encoded by the structure of an algebra that is naturally defined by the equations of the system. In order to do computations in this algebra, the algorithms choose a representation of it which is usually given by a set of monomials satisfying some conditions. In this thesis we show that these conditions are often too restrictive and may lead to severe numerical instability of the algorithms. As a result, these methods are not feasible in finite precision arithmetic. We propose the framework of truncated normal forms to remedy this and develop new, robust and stabilized methods.
The framework generalizes Gröbner and border bases as well as some resultant-based algorithms. We present explicit constructions for square systems which show `generic' behavior with respect to the Bézout root count in affine space or the Bernstein-Khovanskii-Kushnirenko root count in the algebraic torus. We show how the presented techniques can be used in a homogeneous context by introducing homogeneous normal forms, which offer an elegant way of dealing with solutions `at infinity'. For instance, homogeneous normal forms can be used to solve systems which define finitely many solutions in projective space by working in its graded, homogeneous coordinate ring. We develop the necessary theory for generalizing this approach to the homogeneous coordinate ring (or Cox ring) of compact toric varieties. In this way we obtain an algorithm for solving systems on a compactification of the algebraic torus which takes the polyhedral structure of the equations into account. This approach is especially effective in the case where the system defines solutions on or near the boundary of the torus in its compactification, which typically causes difficulties for other solvers. Each of the proposed methods is tested extensively in numerical experiments and compared to existing implementations. Homotopy methods are perhaps the most popular methods for the numerical solution of systems of polynomial equations. One of the reasons is that, in general, their computational complexity scales much better with the number of variables in the system than that of algebraic methods. However, the reliability of these methods depends strongly on some design choices in the algorithm. An important example is the choice of step size in the discretization of the solution paths. Choosing this too small leads to a large computational cost and prohibitively long computation times, while choosing it too large may lead to path jumping, which is a typical cause for missing solutions in the output of a homotopy algorithm. In this thesis, a new adaptive step-size path-tracking algorithm is proposed which is shown to be much less prone to path jumping than state-of-the-art software."
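As a generic illustration of predictor-corrector path tracking with a simple adaptive step size (a textbook scheme, not the algorithm proposed in the thesis), the following Python sketch tracks the roots of a toy univariate homotopy H(x, t) = (1 - t) g(x) + t f(x) from the known roots of a start system g to the roots of a target system f; the growth/shrink factors and tolerances are arbitrary choices for illustration.

# Illustrative sketch only: Euler predictor + Newton corrector with a simple
# adaptive step size, on a toy univariate homotopy. Generic textbook scheme.
def track(f, df, g, dg, x0, h0=0.1, tol=1e-10, max_newton=5):
    """Track a root of H(x, t) = (1 - t)*g(x) + t*f(x) from t = 0 to t = 1."""
    x, t, h = complex(x0), 0.0, h0
    while t < 1.0:
        h = min(h, 1.0 - t)
        # Euler predictor: dx/dt = -H_t / H_x, with H_t = f(x) - g(x).
        Hx = (1.0 - t) * dg(x) + t * df(x)
        x_new = x - h * (f(x) - g(x)) / Hx
        # Newton corrector at the new value of t.
        t_new, converged = t + h, False
        for _ in range(max_newton):
            H = (1.0 - t_new) * g(x_new) + t_new * f(x_new)
            Hx = (1.0 - t_new) * dg(x_new) + t_new * df(x_new)
            step = H / Hx
            x_new -= step
            if abs(step) < tol:
                converged = True
                break
        if converged:
            x, t = x_new, t_new
            h *= 1.5     # corrector converged: grow the step size
        else:
            h *= 0.5     # corrector failed: shrink the step size and retry
    return x

# Toy example: start system g(x) = x^2 - 1 (roots +/-1), target f(x) = x^2 - 2.
f, df = (lambda x: x**2 - 2), (lambda x: 2 * x)
g, dg = (lambda x: x**2 - 1), (lambda x: 2 * x)
print([track(f, df, g, dg, r) for r in (1.0, -1.0)])    # approximately +/- sqrt(2)

Heuristic step control of this kind can let the step grow too large and cause path jumping when solution paths come close together, which is precisely the failure mode that the adaptive scheme proposed in the thesis aims to mitigate.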