
Publication

Memory time span in LSTMs for multi-speaker source separation

Book contribution - Book chapter / Conference contribution

© 2018 International Speech Communication Association. All rights reserved. With deep learning approaches becoming state-of-the-art in many speech (as well as non-speech) related machine learning tasks, efforts are being made to delve into neural networks, which are often regarded as a black box. This paper analyzes how recurrent neural networks (RNNs) cope with temporal dependencies by determining the relevant memory time span in a long short-term memory (LSTM) cell. This is done by leaking the state variable with a controlled lifetime and evaluating the task performance. This technique can be used for any task to estimate the time span the LSTM exploits in that specific scenario. The focus in this paper is on the task of separating speakers from overlapping speech. We discern two effects: a long-term effect, probably due to speaker characterization, and a short-term effect, probably exploiting phone-sized formant tracks.
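The abstract describes probing the LSTM's memory by leaking the cell state with a controlled lifetime and re-evaluating the trained separation model for each lifetime. The paper's exact formulation is not given here, so the sketch below is only an illustration of the idea: a single LSTM step whose previous cell state is scaled by exp(-1/τ), so information older than roughly τ frames decays regardless of what the forget gate learned, followed by a hypothetical sweep over lifetimes. The function names, gate ordering, and the placeholder evaluation step are assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def leaky_lstm_step(x, h_prev, c_prev, W, U, b, lifetime=None):
    """One LSTM step with an optional leak on the previous cell state.

    `lifetime` is the controlled memory span in frames: the previous cell
    state is scaled by exp(-1/lifetime), so its contribution decays away
    after roughly `lifetime` steps.  lifetime=None leaves the cell intact
    (standard LSTM).  This is a sketch, not the paper's implementation.
    """
    leak = 1.0 if lifetime is None else np.exp(-1.0 / lifetime)
    c_prev = leak * c_prev

    z = W @ x + U @ h_prev + b       # stacked pre-activations of all four gates
    H = h_prev.shape[0]
    i = sigmoid(z[0 * H:1 * H])      # input gate
    f = sigmoid(z[1 * H:2 * H])      # forget gate
    g = np.tanh(z[2 * H:3 * H])      # candidate cell update
    o = sigmoid(z[3 * H:4 * H])      # output gate

    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D, H, T = 40, 8, 200                       # feature dim, hidden size, frames
    W = 0.1 * rng.standard_normal((4 * H, D))
    U = 0.1 * rng.standard_normal((4 * H, H))
    b = np.zeros(4 * H)
    x_seq = rng.standard_normal((T, D))        # stand-in for a feature sequence

    for lifetime in (5.0, 20.0, 100.0, None):  # None = unmodified LSTM
        h, c = np.zeros(H), np.zeros(H)
        for x in x_seq:
            h, c = leaky_lstm_step(x, h, c, W, U, b, lifetime)
        # In the paper, the trained separation network would be scored on the
        # task (e.g. signal-to-distortion ratio) at each lifetime; here we
        # only print the final hidden state to show the sweep structure.
        print(lifetime, np.round(h[:3], 3))
```

In such a sweep, the lifetime at which task performance stops improving gives an estimate of the memory span the LSTM actually exploits for that task.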
Book: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Pages: 1477 - 1481
ISBN: 978-1-5108-7221-9
Year of publication: 2018
BOF-keylabel: yes
IOF-keylabel: yes
Authors from: Higher Education
Accessibility: Open