
Project

Fulbright grant: Renata Turkes.

Deep learning has surely become a buzzword over the last decade, but rightly so: it is an extremely powerful tool that learns from large amounts of past data and has significantly outperformed previous state-of-the-art methods in image processing, language translation, speech and object recognition, biomedicine, drug design, and many other fields. It is ubiquitous in our daily lives: Google Translate, Google Maps, Alexa, Siri, and our phones' face- or fingerprint-unlock features all rely on deep learning, and it will be of crucial importance for self-driving cars.

The practical success of deep learning, however, far outstrips our theoretical understanding. How do deep neural networks work and learn? How well will a network generalize to unseen data? When does it fail, and how can failure be avoided? My goal is to shed some light on the last question by trying to identify classes of problems for which deep learning performs poorly.

In particular, we plan to examine problems where we would expect topological data analysis to outperform deep neural networks. Topology studies shape, so we expect it to be better at detecting the number of connected components, holes and voids in higher dimensions, or shape convexity; recent results indicate that the same might be true for detecting shape curvature. We plan to investigate this experimentally by comparing the two approaches on a number of diverse synthetic datasets and on data available in the literature. In addition, deep learning is expected to underperform when little data is available or when the data is noisy, so we will also include such scenarios in our computational experiments.
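To make the first of these topological quantities concrete, here is a minimal, self-contained sketch of counting connected components (the 0-dimensional Betti number) of a point cloud: link any two points closer than a chosen scale and count the resulting clusters with a union-find structure. The function name, the toy data, and the fixed-scale formulation are illustrative choices, not part of the project's actual pipeline; in practice TDA tools track how such counts change across all scales.

```python
import math

def connected_components(points, epsilon):
    """Count connected components (Betti number b0) of the graph that
    links any two points closer than epsilon -- a 0-dimensional
    topological feature of the point cloud."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Find the root of i's component, with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < epsilon:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj  # merge the two components

    return len({find(i) for i in range(n)})

# Two well-separated clusters: two components at a small scale,
# one component once the scale exceeds the gap between clusters.
cloud = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
print(connected_components(cloud, 0.5))   # -> 2
print(connected_components(cloud, 10.0))  # -> 1
```

A network trained on raw coordinates has to learn this invariant from examples, whereas the topological computation reads it off directly, which is exactly the kind of gap the project aims to measure.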
Topological features can thus be recommended as an alternative to deep learning whenever they promise superior performance, but the findings will also suggest how to improve existing deep architectures, for example with an additional network layer for topological signatures, or with topological loss functions for the network's prediction error.
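As a rough illustration of the kind of signature such a layer or loss term could consume, the sketch below computes 0-dimensional persistence "death" scales: the distance thresholds at which separate components merge as the scale grows (equivalent to single-linkage merge heights). This is a simplified stand-in written for this page, not the project's actual method or any specific library's API; real TDA software also tracks higher-dimensional features such as holes and voids.

```python
import math
from itertools import combinations

def h0_death_scales(points):
    """Scales at which connected components of a point cloud merge as
    the distance threshold grows (0-dimensional persistence deaths,
    computed via single-linkage with union-find).  The resulting list
    of values is a simple, fixed-size-free topological signature."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Process all pairwise edges in order of increasing length.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # two components merge at scale d
    return deaths

# A tight pair plus one distant point: one small merge scale,
# then one large merge scale bridging the gap.
cloud = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
print(h0_death_scales(cloud))
```

Such a vector of merge scales could, for instance, be summarized into fixed-length statistics and concatenated to a network's feature vector, or penalized in a loss term when the predicted shape has the wrong number of components.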
Date: 1 Sep 2021 → 31 May 2022
Keywords: ARTIFICIAL INTELLIGENCE, INTERNET OF THINGS
Disciplines: Wireless communications, Automation, feedback control and robotics