Project

Block-quantized and data-efficient DNNs on vector processors at the edge

Some of the most powerful machine learning algorithms, deep neural networks, require vast amounts of computation and memory during training. Training a neural network is therefore costly in time, memory, and energy. This PhD aims to make existing training algorithms more hardware-efficient and to explore less conventional, novel training algorithms. In particular, it will investigate how the memory and computational complexity of training can be reduced while retaining good algorithmic accuracy. The ultimate goal is efficient on-chip training, even on edge devices.
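To illustrate the "block-quantized" idea named in the project title, the sketch below shows one common, generic form of block-wise quantization: a weight tensor is split into fixed-size blocks, each block shares a single floating-point scale, and the values in it are rounded to low-bit integers, which shrinks storage and arithmetic cost. This is a minimal illustration only, not this project's actual method; the block size, bit width, and function names are assumptions.

```python
import numpy as np

def block_quantize(x, block_size=64, bits=8):
    """Quantize a 1-D array block by block, with one scale per block."""
    qmax = 2 ** (bits - 1) - 1                   # e.g. 127 for 8-bit signed values
    pad = (-x.size) % block_size                 # pad so blocks divide evenly
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                    # avoid division by zero in empty blocks
    q = np.clip(np.round(blocks / scales), -qmax, qmax).astype(np.int8)
    return q, scales

def block_dequantize(q, scales, orig_size):
    """Reconstruct a floating-point approximation of the original array."""
    return (q.astype(np.float32) * scales).reshape(-1)[:orig_size]

if __name__ == "__main__":
    w = np.random.randn(1000).astype(np.float32)
    q, s = block_quantize(w)
    w_hat = block_dequantize(q, s, w.size)
    print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

Sharing one scale per block rather than per tensor keeps the quantization error small for blocks with small dynamic range, which is one reason block-wise schemes are attractive for low-precision training and for vector processors that operate on fixed-width chunks of data.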

Date: 3 Mar 2021 → Today
Keywords: Deep learning, On-chip training, Artificial Intelligence
Disciplines: Machine learning and decision making, Modelling not elsewhere classified, Other electrical and electronic engineering not elsewhere classified
Project type: PhD project