
Neural Networks under Epistemic Uncertainty for Robust Prediction

Although artificial intelligence (AI) has improved remarkably over the last few years, its inability to deal with fundamental uncertainty severely limits its application. This thesis will reimagine AI to properly treat the uncertainty stemming from our inevitably partial knowledge of the world. As currently practised, AI cannot make predictions robust enough to stand the test of data generated by processes that differ, even in tiny details, from those seen at training time, as shown by 'adversarial' examples able to fool deep neural networks. While traditional machine learning (ML) recognises this issue under different names (e.g. 'overfitting'), it seems unable to address it other than incrementally. As a result, AI systems exhibit brittle behaviour and find it difficult to operate in new situations, e.g. adapting to driving in heavy rain or to other road users' driving styles, which may derive from cultural traits. Epistemic AI's paradoxical principle is that AI should first and foremost learn from the data it cannot see.
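For illustration only, and not the project's own method: a minimal Python sketch of one common baseline for quantifying epistemic uncertainty, a deep ensemble, in which disagreement between independently trained networks acts as a proxy for uncertainty about data the models have not seen. All function choices, network sizes, and test points below are illustrative assumptions; the example relies on scikit-learn's MLPRegressor.

```
# Minimal sketch (assumed baseline, not the project's method): ensemble
# disagreement as a crude proxy for epistemic uncertainty. Predictions
# tend to diverge on inputs far from the training data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-2.0, 2.0, size=(200, 1))  # data the models *can* see
y_train = np.sin(3 * X_train[:, 0]) + 0.1 * rng.standard_normal(200)

# Train an ensemble of identical networks differing only in random seed.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=seed)
    .fit(X_train, y_train)
    for seed in range(5)
]

# Last test point lies far outside the training range (out-of-distribution).
X_test = np.array([[0.0], [1.5], [6.0]])
preds = np.stack([m.predict(X_test) for m in ensemble])  # (5 models, 3 inputs)

mean = preds.mean(axis=0)
epistemic = preds.std(axis=0)  # ensemble disagreement per input
for x, mu, sigma in zip(X_test[:, 0], mean, epistemic):
    print(f"x = {x:4.1f}  prediction = {mu:+.2f}  ensemble std = {sigma:.2f}")
```

On typical runs the ensemble standard deviation is noticeably larger at x = 6.0 than inside the training range, which is exactly the signal a robust system could use to withhold or qualify a prediction under distribution shift.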

Date: 5 Nov 2021 → Today
Keywords: Uncertainty, Epistemic AI
Disciplines: Adaptive agents and intelligent robotics
Project type: PhD project