
Project

Sparse representations for static and dynamic point clouds based on deep learning (FWOSB88)

3D technologies are attracting enormous academic and industrial interest due to the growing number of possible applications. A first step in creating a 3D model of the real world is to capture depth information using depth-sensing technology. The depth maps produced by one or more depth-sensing devices are re-projected to form a point cloud. The main goal of this research proposal is to use deep learning techniques to learn bases that yield sparse representations of static and dynamic point clouds. The first part of the project constructs sparse representations for static point clouds based on deep learning; the results will then be applied to 3D surface reconstruction of static point clouds. The second part investigates sparse representations for dynamic point clouds, whose potential will be demonstrated on object classification. By using information from multiple frames, the results are expected to outperform the current state of the art.
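The re-projection step mentioned above can be made concrete. Assuming a standard pinhole camera model with known intrinsics (focal lengths fx, fy and principal point cx, cy — parameters introduced here for illustration, not specified in the project text), a depth map is back-projected to a point cloud as in this minimal sketch:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, in metres) to an N x 3 point cloud
    using a pinhole camera model. Intrinsics are assumed known from
    calibration; pixels with zero depth are dropped as invalid."""
    h, w = depth.shape
    # pixel coordinate grids: u runs along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```

With multiple calibrated sensors, each device's points would additionally be transformed by its extrinsic pose before the clouds are merged.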

Constructing sparse representations of point clouds is a novel approach to processing such unstructured data. It is expected to yield strong results in several domains and can potentially enable real-time applications with reduced processing power. The application domains are numerous, including 3D camera processing, camera monitoring and tracking, 3D video, augmented reality, and many more.
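The learned bases are the subject of the research itself, but the notion of a sparse representation over a basis can be illustrated with a classical baseline: Orthogonal Matching Pursuit, which greedily approximates a signal using at most k atoms of a fixed dictionary. This is a minimal sketch assuming a unit-norm dictionary D, not the project's deep-learning method:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: select up to k atoms (columns) of
    dictionary D (n x m, unit-norm columns) to approximate signal x.
    Returns a coefficient vector with at most k nonzero entries."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected atoms, then update residual
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef
```

Replacing the fixed dictionary with bases learned by a deep network, and applying the idea to unstructured point-cloud data, is precisely where the project departs from this classical setting.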
Date: 1 Nov 2019 → 31 Oct 2023
Keywords: sparse representation of point clouds, 3D surface reconstruction
Disciplines: Computer vision, Image and language processing, Pattern recognition and neural networks, Data visualisation and imaging