
Project

New Methods for Self-Supervised Image Representation Learning

Representation learning plays a key role in most machine learning algorithms. When solving a particular task, the input is usually mapped to an intermediate latent space before being mapped to the final output space. The representation of an input is the set of feature values to which it is mapped in this latent space. A good representation captures the essential information present in the data in a readily accessible format, while removing noise and redundancy. Consequently, the quality of a representation has a strong influence on model performance and ease of training on downstream tasks. Better representations, in turn, can lead to more practical, safer and more capable AI applications, for example in autonomous driving or robotic surgery. The main goal of our research is to advance machine learning performance beyond the current state of the art by improving self-supervised representation learning on image data. In particular, we focus on approaches based on contrastive learning, as these have a strong theoretical underpinning in information theory and have recently shown promising results. We tackle our goal by unifying pixel-level and image-level schemes within a single framework and by generating novel context pairs from both images and videos.
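To make the contrastive learning setting concrete, the sketch below shows a minimal image-level InfoNCE-style loss: two augmented views of the same image form a positive pair, and all other images in the batch serve as negatives. This is a generic illustration under assumed names and hyperparameters (e.g. the temperature value), not the project's actual method or code.

```python
# Minimal sketch of an image-level InfoNCE contrastive loss (illustrative only).
import torch
import torch.nn.functional as F


def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Cosine-similarity logits between every view-1 and every view-2 embedding.
    logits = z1 @ z2.t() / temperature  # shape (N, N)
    # The matching view (the diagonal) is the positive; all other entries are negatives.
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage: random embeddings standing in for encoder outputs.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(info_nce_loss(z1, z2).item())
```

Pixel-level variants of this idea apply the same loss to corresponding spatial locations rather than whole-image embeddings; unifying the two levels is one of the directions described above.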

Date: 29 Apr 2021 → Today
Keywords: Machine learning, Computer vision, Self-supervised learning, Representation learning
Disciplines: Computer vision, Data mining, Knowledge representation and reasoning
Project type: PhD project