
Publication

Multi-modal deep network for RGB-D segmentation of clothes

Journal Contribution - Journal Article

In this Letter, the authors propose a deep-learning-based method for semantic segmentation of clothes from RGB-D images of people. First, they present a synthetic dataset containing more than 50,000 RGB-D samples of characters in different clothing styles, featuring various poses and environments and covering a total of nine semantic classes. The proposed data generation pipeline allows fast production of RGB images, depth maps, and ground-truth label maps. Second, a novel multi-modal encoder–decoder convolutional network is proposed that operates on the RGB and depth modalities. Multi-modal features are merged by trained fusion modules that apply multi-scale atrous convolutions during fusion. The method is evaluated numerically on synthetic data and assessed visually on real-world data. The experiments demonstrate that the proposed model outperforms existing methods.
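The fusion mechanism described in the abstract can be illustrated with a short sketch. The following PyTorch module is a minimal, hypothetical rendition of a fusion block that merges same-resolution RGB and depth feature maps through parallel atrous (dilated) convolutions; the channel widths, dilation rates, and layer layout are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class AtrousFusion(nn.Module):
    # Hypothetical fusion block: merges RGB and depth features with
    # parallel 3x3 atrous convolutions at several dilation rates.
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        # One 3x3 atrous branch per dilation rate, applied to the
        # concatenated RGB+depth features.
        self.branches = nn.ModuleList(
            nn.Conv2d(2 * channels, channels, kernel_size=3,
                      padding=r, dilation=r)
            for r in rates
        )
        # 1x1 projection back to the encoder channel width.
        self.project = nn.Conv2d(len(rates) * channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, rgb_feat, depth_feat):
        # Concatenate the two modalities along the channel axis.
        x = torch.cat([rgb_feat, depth_feat], dim=1)
        # Each branch sees a different receptive field; stack the results.
        multi_scale = torch.cat([self.relu(b(x)) for b in self.branches], dim=1)
        return self.relu(self.project(multi_scale))

# Fuse two 64-channel feature maps from the RGB and depth encoder streams.
fusion = AtrousFusion(channels=64)
rgb = torch.randn(1, 64, 60, 80)
depth = torch.randn(1, 64, 60, 80)
fused = fusion(rgb, depth)  # shape: (1, 64, 60, 80)

Because each 3x3 branch uses padding equal to its dilation rate, every branch preserves spatial resolution, so the branch outputs can be concatenated and projected back to the encoder's channel width.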
Journal: Electron. Lett.
ISSN: 0013-5194
Issue: 9
Volume: 56
Pages: 432-434
Publication year: 2020
  • DOI: https://doi.org/10.1049/el.2019.4150
  • WoS Id: 000530281100007
  • Scopus Id: 85084285046
  • ORCID: /0000-0002-2547-1517/work/121055310
  • ORCID: /0000-0002-2881-2727/work/82263188
  • ORCID: /0000-0001-7290-0428/work/84065684
CSS-citation score: 1
Accessibility: Open