
Project

Development of multimodal deep neural networks for spatial multi-omics data fusion

Biological processes are inherently complex, and analyzing them through a single data source excludes important complementary information from the study. Single-cell multi-omics and, more recently, spatially resolved multi-omics technologies provide a multi-faceted view of cell identity, tissue microarchitecture, cellular communication, and pathophysiology. The aim of this project is to develop novel methods that leverage the multimodal data produced by these technologies to advance the state of the art in cell type/state identification, cell niche characterization, and whole-slide sample classification. These methods will build on current state-of-the-art deep neural network architectures, adapted and improved to handle high-dimensional multimodal data, and they will be evaluated on both synthetic and real-world single-cell and spatial multi-omics datasets. We will compile the developed algorithms into a complete end-to-end data integration framework. In doing so, we aim to contribute to the next generation of molecular digital pathology, leading to new insights into disease and improving current diagnostic and prognostic potential. The project will also provide resources and methods for analyzing and integrating different data modalities that are potentially applicable beyond the biomedical sciences.
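To make the idea of multimodal fusion concrete, the sketch below shows one common pattern: modality-specific encoders that compress each high-dimensional omics input into a latent representation, followed by a simple concatenation-based fusion and a classification head (e.g. for cell type/state prediction). The modalities, dimensions, and fusion strategy are illustrative assumptions only, not the architecture developed in this project.

```python
# Minimal sketch of intermediate (concatenation-based) multimodal fusion
# in PyTorch. All dimensions and the two example modalities (RNA counts
# and protein abundances) are hypothetical placeholders.
import torch
import torch.nn as nn

class MultimodalFusionClassifier(nn.Module):
    def __init__(self, rna_dim=2000, protein_dim=100, latent_dim=64, n_classes=10):
        super().__init__()
        # Modality-specific encoders map each high-dimensional input
        # to a latent representation of the same size.
        self.rna_encoder = nn.Sequential(
            nn.Linear(rna_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim), nn.ReLU(),
        )
        self.protein_encoder = nn.Sequential(
            nn.Linear(protein_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim), nn.ReLU(),
        )
        # The fused (concatenated) representation feeds a classifier head.
        self.classifier = nn.Sequential(
            nn.Linear(2 * latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, rna, protein):
        z = torch.cat([self.rna_encoder(rna), self.protein_encoder(protein)], dim=-1)
        return self.classifier(z)

# Example forward pass on random tensors standing in for a mini-batch of cells.
model = MultimodalFusionClassifier()
rna = torch.randn(8, 2000)      # 8 cells x 2000 genes (placeholder data)
protein = torch.randn(8, 100)   # 8 cells x 100 proteins (placeholder data)
logits = model(rna, protein)    # shape: (8, n_classes)
```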

Date: 1 Oct 2021 → Today
Keywords: Deep learning, Multimodal data integration, Multi-omics analysis
Disciplines: Bioinformatics data integration and network biology, Computational biomodelling and machine learning, Development of bioinformatics software, tools and databases, Single-cell data analysis
Project type: PhD project