
Publication

Multimodal Deep Unfolding for Guided Image Super-Resolution

Journal contribution - Journal article

The reconstruction of a high resolution image given a low resolution observation is an ill-posed inverse problem in imaging. Deep learning methods rely on training data to learn an end-to-end mapping from a low-resolution input to a high-resolution output. Unlike existing deep multimodal models that do not incorporate domain knowledge about the problem, we propose a multimodal deep learning design that incorporates sparse priors and allows the effective integration of information from another image modality into the network architecture. Our solution relies on a novel deep unfolding operator, performing steps similar to an iterative algorithm for convolutional sparse coding with side information; therefore, the proposed neural network is interpretable by design. The deep unfolding architecture is used as a core component of a multimodal framework for guided image super-resolution. An alternative multimodal design is investigated by employing residual learning to improve the training efficiency. The presented multimodal approach is applied to super-resolution of near-infrared and multi-spectral images as well as depth upsampling using RGB images as side information. Experimental results show that our model outperforms state-of-the-art methods.
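The core idea described in the abstract, unrolling an iterative sparse-coding algorithm into network layers whose shrinkage step is steered by a code derived from the guidance modality, can be illustrated with a small sketch. The snippet below is a simplified LISTA-style unfolding with side information, not the authors' exact operator: the class name, layer sizes, number of stages, and the choice to inject the side-information code as an additive bias before the soft-thresholding step are all illustrative assumptions.

```python
# Minimal sketch (assumed, simplified) of deep unfolding with side information:
# each ISTA iteration becomes a learnable stage, and a code computed from the
# guidance modality biases the sparse code toward a shared support.
import torch
import torch.nn as nn


class UnfoldedSparseCoder(nn.Module):
    def __init__(self, in_dim=64, code_dim=128, n_stages=5):
        super().__init__()
        self.We = nn.Linear(in_dim, code_dim, bias=False)   # encoder step of ISTA
        self.S = nn.Linear(code_dim, code_dim, bias=False)  # learned recurrent update
        self.Wg = nn.Linear(in_dim, code_dim, bias=False)   # guidance -> side-information code
        self.theta = nn.Parameter(torch.full((n_stages,), 0.1))  # per-stage thresholds
        self.n_stages = n_stages

    @staticmethod
    def soft_threshold(x, theta):
        # Proximal operator of the l1 norm (soft shrinkage).
        return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

    def forward(self, y, guide):
        # y: features of the low-resolution input; guide: features of the guidance image.
        side = self.Wg(guide)
        z = self.soft_threshold(self.We(y) + side, self.theta[0])
        for k in range(1, self.n_stages):
            # Unfolded iteration: data term + recurrent term + side-information bias.
            z = self.soft_threshold(self.We(y) + self.S(z) + side, self.theta[k])
        return z
```

In a guided super-resolution pipeline, such a coder would typically sit between a feature-extraction stage and a reconstruction stage, with the unfolded weights and thresholds trained end-to-end; the additive-bias coupling shown here is only one simple way to inject the side information.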
Journal: IEEE Transactions on Image Processing
ISSN: 1057-7149
Volume: 29
Pages: 8443-8456
Year of publication: 2020
  • WoS Id: 000562028700001
  • ORCID: /0000-0002-2928-4014/work/82138141
  • ORCID: /0000-0002-0688-8173/work/82138076
  • ORCID: /0000-0001-9300-5860/work/82137776
  • Scopus Id: 85090800001
  • DOI: https://doi.org/10.1109/tip.2020.3014729
CSS-citation score: 1
Accessibility: Closed