Audio-Visual Signal Processing
The Audio-Visual Signal Processing (AVSP) Lab investigates novel methods for the automated interpretation of social and affective behaviour. Research areas include social signal processing, multisensor fusion, computer vision, ubiquitous computing, and machine learning. Our expertise centres on developing automated methods to analyse social signals from verbal and non-verbal behaviour using multiple sensing modalities (e.g. wearables, video, audio, speech, and physiological signals). The overarching goal is to employ and advance intuitive, mathematically principled signal representations and machine learning models for understanding and describing human behaviour. A particular emphasis is on devising novel personalized machine learning models that can accurately predict future changes in key biomarkers and cognitive scores related to Alzheimer's Disease (AD) and other neurological and mental health conditions, such as depression and stress.
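To make the idea of personalization concrete, here is a minimal illustrative sketch (not the lab's actual method): a population-level linear model of cognitive-score decline over time is adapted to an individual by estimating a subject-specific offset from that subject's early visits, and the personalized model is then used to predict later visits. All parameter values and the simulated data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed population-level trajectory of a cognitive score over time (years).
pop_slope, pop_intercept = -1.2, 30.0

def population_predict(t):
    """Population-average cognitive score at time t."""
    return pop_intercept + pop_slope * np.asarray(t, dtype=float)

def personalize(t_obs, y_obs):
    """Estimate a subject-specific offset from a few observed visits."""
    residual = np.asarray(y_obs, dtype=float) - population_predict(t_obs)
    return residual.mean()  # simple per-subject bias correction

# Simulated subject whose baseline sits above the population mean.
t = np.arange(0, 5)                 # five annual visits
offset_true = 3.0
y = population_predict(t) + offset_true + rng.normal(0, 0.3, size=t.size)

# Personalize on the first two visits, then predict the remaining three.
offset = personalize(t[:2], y[:2])
y_future = population_predict(t[2:]) + offset

err_pop = np.abs(population_predict(t[2:]) - y[2:]).mean()
err_per = np.abs(y_future - y[2:]).mean()
print(f"population error: {err_pop:.2f}, personalized error: {err_per:.2f}")
```

In practice, personalization can involve far richer models (e.g. mixed-effects or subject-adapted neural models), but the same principle applies: a shared population model is corrected with subject-specific parameters estimated from that subject's own data.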