
Project

Design-Technology Co-optimization techniques for enablement of MRAM-based Machine Learning hardware

This PhD topic will focus on breaking the barrier of device engineering to enable true analogue in-memory computing for machine learning training algorithms. The approach adopted for such a device optimization requires optimization at every abstraction level of a computing system, from algorithm and architecture down to circuits and device engineering. ML algorithms such as deep neural networks (DNNs) have achieved important breakthroughs in a myriad of application domains. The core operations in a DNN are matrix-vector multiplications (MVMs), and the dominant model today is to train DNNs using software, which results in extremely large energy consumption. A DNN can be physically represented by crossbar-array hardware with programmable resistors (referred to as weight memory devices). Ideally, DNN accelerators should consist of dense non-volatile memories with large resistance (MOhm range) and narrow parameter distributions. MRAM technology is a promising candidate for such an approach: its resistance can be arbitrarily tuned to reach the values required for analogue MVMs. SOT and VCMA MRAM will be explored due to their low writing requirements, and design solutions will be conceived in the context of hardware for ML.
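For illustration, a crossbar performing an analogue MVM can be modelled with Kirchhoff's current law: the output current of each line is the sum of conductance-weighted input voltages, I_i = Σ_j G_ij · V_j, where the conductances G_ij are programmed into the weight memory devices. The sketch below is a minimal, idealized model (not the project's actual device model); the function name crossbar_mvm, the conductance window g_min/g_max, and the log-normal variation parameter sigma are illustrative assumptions used to show why narrow device parameter distributions matter.

```python
import numpy as np

def crossbar_mvm(weights, voltages, g_min=1e-7, g_max=1e-6, sigma=0.0, rng=None):
    """Idealized analogue MVM on a resistive crossbar (illustrative sketch).

    weights  : 2D array of signed weights, mapped to conductances in [g_min, g_max] (siemens)
    voltages : 1D input voltage vector (volts)
    sigma    : relative log-normal spread modelling device-to-device variation
    Returns the output-line currents (amperes), which encode the MVM result.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    # Map normalized |weights| onto the available conductance window
    # (g_max = 1e-6 S corresponds to a MOhm-range resistance).
    g = g_min + (np.abs(w) / np.abs(w).max()) * (g_max - g_min)
    # Device-to-device variation: a narrow distribution (small sigma) keeps the MVM accurate.
    if sigma > 0:
        g = g * rng.lognormal(mean=0.0, sigma=sigma, size=g.shape)
    # Fold the weight sign back in (in hardware this is typically a differential device pair).
    g_signed = np.sign(w) * g
    # Kirchhoff's current law: each output line sums I_i = sum_j G_ij * V_j.
    return g_signed @ np.asarray(voltages, dtype=float)

# Example: 4x3 weight array, 3 input voltages, 5% device variation.
I_out = crossbar_mvm(np.random.randn(4, 3), [0.1, 0.2, -0.1], sigma=0.05)
print(I_out)
```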
Date: 18 Feb 2022 → 31 May 2022
Keywords: AI accelerator, Machine Learning hardware, MRAM, MRAM-based hardware, Design-Technology co-optimization, Analog in-memory computing
Disciplines: Analogue, RF and mixed signal integrated circuits
Project type: PhD project