Project

Fast, adaptive and wearable auditory attention decoding: towards practical neuro-steered hearing devices

Hearing aid users have difficulty understanding speech in noisy environments, which is why hearing aids are equipped with noise reduction algorithms. However, these algorithms often fail in so-called ‘cocktail party’ scenarios with multiple speakers, because they do not know which speaker the user intends to attend to and which speaker(s) should be treated as noise. Recent research has shown that it is possible to identify the attended speaker by decoding the listener’s brain activity recorded with electroencephalography (EEG). Since this discovery, several studies have combined such auditory attention decoding (AAD) algorithms with acoustic noise reduction algorithms as a proof of concept towards ‘neuro-steered’ hearing aids. However, a practical realization is not yet within reach due to three important roadblocks: (1) current AAD decoders require more than 10 seconds of EEG data to reliably decode attention, which is too long for practical purposes; (2) current AAD decoders are not able to adapt to the specific EEG signals of the end-user; (3) AAD experiments are typically conducted with bulky EEG recording devices, which cannot be worn in daily life. In this project, we will address these three deal-breaking roadblocks by designing an adaptive, data-driven AAD algorithm that exploits instantaneous brain lateralization (thereby making it fast enough for practical use) and that is amenable to a distributed realization in a wireless network of wearable EEG sensors.
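For background, the AAD decoders referred to above are commonly based on linear stimulus reconstruction: a decoder is trained to reconstruct the attended speech envelope from time-lagged EEG, and attention is then assigned to the speaker whose envelope correlates best with the reconstruction. The sketch below illustrates that generic baseline only, not the project's own algorithm; the function names, lag count, and regularization value are illustrative assumptions.

import numpy as np

def lag_matrix(eeg, lags):
    # Stack time-lagged copies of the EEG channels column-wise so the
    # decoder can integrate over a short window of neural history
    # (e.g., 32 lags is roughly 250 ms at a 128 Hz sampling rate).
    T, C = eeg.shape
    X = np.zeros((T, C * lags))
    for k in range(lags):
        X[k:, k * C:(k + 1) * C] = eeg[:T - k]
    return X

def train_decoder(eeg, attended_env, lags=32, reg=1e-3):
    # Backward model: ridge-regularized least squares mapping lagged
    # EEG to the (known) attended speech envelope of the training data.
    X = lag_matrix(eeg, lags)
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(n), X.T @ attended_env)

def decode_attention(eeg, env_a, env_b, w, lags=32):
    # Reconstruct an envelope from the EEG and attribute attention to
    # the speaker whose true envelope correlates best with it.
    recon = lag_matrix(eeg, lags) @ w
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return "A" if r_a > r_b else "B"

Because the correlation estimates in decode_attention only become reliable over long windows, this baseline needs tens of seconds of EEG per decision, which is exactly the first roadblock the project targets.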

Date: 7 Oct 2020 → Today
Keywords: Brain-computer interfaces, EEG signal processing, Hearing aids
Disciplines: Signal processing
Project type: PhD project