
Project

Self-learning, stepped sensor interface

With the miniaturization of electronic hardware, it becomes feasible to carry portable devices equipped with sensors with us at all times of the day. Many applications build on this to give us more direct feedback on our well-being and our environment, or to provide context-aware services. However, the energy consumption of the required always-on sensors impedes their practical realization, as it is currently infeasible to keep a collection of sensors continuously activated on our mobile devices. A good example is a microphone with advanced sound/speech processing to detect, for instance, certain spoken keywords or the presence of certain persons.

This project will design a stepped-sensing interface for such an always-on sound/speech sensor. The flexible interface is capable of operating in several modes and of selectively disabling and reconfiguring several sub-blocks (analog, digital, as well as processor subsystems). This allows the interface to provide, at any time, the minimum performance required under the current circumstances, and thus to consume no more energy than necessary. Through deeply embedded machine learning at the hardware level, the smart interface autonomously determines when it should switch from one mode to another, depending on current operating conditions and on experience built up over time. All of this must be achieved with a minimal hardware footprint, so as not to penalize the total system's energy consumption.
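To make the stepped-sensing idea concrete, the sketch below illustrates one possible (purely hypothetical) mode controller in C: an ordered set of operating modes, a confidence-driven escalation/de-escalation rule, and a simple adaptive threshold standing in for the embedded learning. The mode names, thresholds, and update rule are assumptions for illustration, not the project's actual design.

	#include <stdint.h>
	#include <stdbool.h>
	
	/* Hypothetical operating modes for a stepped sound/speech sensor
	 * interface, ordered from lowest to highest power consumption. */
	typedef enum {
	    MODE_SLEEP,          /* analog front-end off, periodic wake-up only   */
	    MODE_ACTIVITY,       /* coarse sound-activity detection (analog only) */
	    MODE_KEYWORD,        /* digital feature extraction + keyword spotting */
	    MODE_FULL_PROCESSING /* processor subsystem on, full speech analysis  */
	} sensor_mode_t;
	
	/* Controller state: the step-up threshold is adapted over time from
	 * feedback ("experience built up over time"). */
	typedef struct {
	    sensor_mode_t mode;
	    float step_up_threshold;   /* escalate when confidence exceeds this */
	    float learning_rate;       /* how fast the threshold adapts         */
	} stepped_controller_t;
	
	/* Escalate or de-escalate based on the detection confidence produced
	 * by the sub-blocks that are currently enabled. */
	static sensor_mode_t controller_update(stepped_controller_t *c,
	                                       float confidence)
	{
	    if (confidence > c->step_up_threshold &&
	        c->mode < MODE_FULL_PROCESSING) {
	        c->mode++;                     /* enable the next sub-block     */
	    } else if (confidence < 0.5f * c->step_up_threshold &&
	               c->mode > MODE_SLEEP) {
	        c->mode--;                     /* power-gate the highest block  */
	    }
	    return c->mode;
	}
	
	/* Adapt the threshold from feedback on whether the last escalation was
	 * useful; a trivial stand-in for the embedded learning. */
	static void controller_learn(stepped_controller_t *c, bool escalation_was_useful)
	{
	    if (escalation_was_useful)
	        c->step_up_threshold -= c->learning_rate;  /* wake up more eagerly */
	    else
	        c->step_up_threshold += c->learning_rate;  /* be more conservative */
	
	    if (c->step_up_threshold < 0.1f) c->step_up_threshold = 0.1f;
	    if (c->step_up_threshold > 0.9f) c->step_up_threshold = 0.9f;
	}

In a real implementation, the escalation decision and the learning rule would be realized in dedicated low-power hardware rather than in software, since the controller itself must not dominate the energy budget it is meant to reduce.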

Date: 1 Jan 2013 → 31 Dec 2016
Keywords: Sensor, Interface
Disciplines: Nanotechnology, Design theories and methods, Artificial intelligence, Cognitive science and intelligent systems