
Project

Efficient Hardware-Software Architectures for Deep-Learning Applications in IoT Architectures (R-9431)

After the technological waves of computing, the internet, and ubiquitous mobile communication, we are currently experiencing a new technological wave of "deep learning", whereby systems are not preprogrammed for specific applications but learn complex tasks by themselves, either in a supervised or an unsupervised way. Recently, the potential and applicability of deep learning have been demonstrated in various application domains such as image recognition, image classification, object detection, speech recognition, automatic language translation, storytelling, etc. Popular models use a deep layering of artificial neural networks with millions of weights. The computational load for training is huge, and inference for specific recognition instances also requires substantial computation. Such computations are often carried out in the data centers of large internet companies, or using power-hungry general-purpose GPUs and floating-point arithmetic. This Ph.D. research aims at developing novel hardware/software architectures for deep learning in embedded and Internet-of-Things (IoT) applications. Special emphasis is placed on ultra-low power consumption and on dedicated processing and memory architectures for activation and weight management. Depending on the application requirements, it should be possible to trade off numerical precision (from low fixed-point bit-widths down to single-bit representations for both activations and weights), recognition accuracy, and power consumption.
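As an illustration of the precision/accuracy trade-off mentioned above, the minimal Python sketch below quantizes a random weight tensor to several fixed-point bit-widths and to a single bit, and reports the resulting quantization error. The function names, the symmetric uniform quantization scheme, and the XNOR-Net-style scaled binarization are illustrative assumptions for this sketch, not methods specified by the project.

```python
import numpy as np

def quantize_fixed_point(x, bits):
    """Symmetric uniform quantization of an array to a signed fixed-point grid (illustrative)."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale  # dequantized values, used here only to measure the error

def binarize(x):
    """1-bit quantization: keep only the sign, scaled by the mean magnitude (XNOR-Net style)."""
    alpha = np.mean(np.abs(x))
    return alpha * np.sign(x)

# Hypothetical weight tensor of a small layer
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64))

for bits in (8, 4, 2):
    err = np.mean((w - quantize_fixed_point(w, bits)) ** 2)
    print(f"{bits}-bit fixed point: MSE = {err:.2e}")

err_bin = np.mean((w - binarize(w)) ** 2)
print(f"1-bit (binary):      MSE = {err_bin:.2e}")
```

In an actual hardware/software co-design study, such error measurements would be evaluated against recognition accuracy and energy per operation rather than mean-squared error alone; this sketch only shows how precision can be varied for both weights and activations.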
Date: 1 Jan 2019 → Today
Keywords: Hardware Architectures
Disciplines: High performance computing, Modelling and simulation, Computer graphics, Human-computer interaction, Virtual reality and related simulation, Computer vision, Interactive and intelligent systems, Pattern recognition and neural networks, Smart sensors, Processor architectures
Project type: Collaboration project