
Publication

Generalization Error Bounds for Deep Unfolding RNNs

Book contribution - Book chapter, Conference contribution

Recurrent Neural Networks (RNNs) are powerful models for sequential data. However, they are often viewed as black boxes and lack interpretability. Deep unfolding methods take a step towards interpretability by designing deep neural networks as learned variations of iterative optimization algorithms that solve various signal processing tasks. In this paper, we explore theoretical aspects of deep unfolding RNNs in terms of their generalization ability. Specifically, we derive generalization error bounds for a class of deep unfolding RNNs via Rademacher complexity analysis. To our knowledge, these are the first generalization bounds proposed for deep unfolding RNNs. We show theoretically that our bounds are tighter than similar ones for other recent RNNs in terms of the number of timesteps. By training models in a classification setting, we demonstrate that deep unfolding RNNs can outperform traditional RNNs on standard sequence classification tasks. These experiments allow us to relate the empirical generalization error to the theoretical bounds. In particular, we show that over-parametrized deep unfolding models like reweighted-RNN achieve tight theoretical error bounds with minimal decrease in accuracy when trained with explicit regularization.
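For readers unfamiliar with deep unfolding, the sketch below illustrates the general idea in a minimal form: each timestep of the RNN runs a fixed number of unrolled ISTA-style (soft-thresholding) iterations whose matrices act as learnable parameters. The weight names (W, U, V), the update rule, and the hyperparameters are illustrative assumptions, not the specific reweighted-RNN architecture analyzed in the paper.

```python
# Minimal sketch of a deep unfolding RNN forward pass (illustrative only,
# not the authors' exact model): each timestep unrolls K ISTA-style
# soft-thresholding iterations, conditioned on the input x_t and on the
# previous hidden state h_{t-1}.
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1 norm: elementwise shrinkage."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def unfolded_rnn_forward(X, W, U, V, lam, K):
    """Forward pass over a sequence X of shape (T, n_in).

    At each timestep, K unfolded layers refine the hidden (sparse) code h_t.
    In a learned model, W, U, V (and possibly lam) would be trained end to end.
    """
    T = X.shape[0]
    n_hidden = U.shape[0]
    h_prev = np.zeros(n_hidden)
    states = []
    for t in range(T):
        h = np.zeros(n_hidden)              # initial estimate for this timestep
        for _ in range(K):                  # K unfolded optimization iterations
            h = soft_threshold(W @ X[t] + U @ h + V @ h_prev, lam)
        h_prev = h
        states.append(h)
    return np.stack(states)                 # (T, n_hidden) sequence of codes

# Example: random weights, a 20-step sequence of 8-dimensional inputs.
rng = np.random.default_rng(0)
n_in, n_hidden, K = 8, 16, 5
X = rng.standard_normal((20, n_in))
W = 0.1 * rng.standard_normal((n_hidden, n_in))
U = 0.1 * rng.standard_normal((n_hidden, n_hidden))
V = 0.1 * rng.standard_normal((n_hidden, n_hidden))
H = unfolded_rnn_forward(X, W, U, V, lam=0.05, K=K)
print(H.shape)  # (20, 16)
```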
Book: Uncertainty in Artificial Intelligence
Series: Uncertainty in Artificial Intelligence
Volume: 161
Pages: 1515-1524
Number of pages: 10
Year of publication: 2021
Accessibility: Open