
Project

Embedded Learning and Optimization for Interaction-aware MPC

Autonomous navigation in uncertain environments, where vehicles (autonomous cars, robots, drones) must interact with other vehicles or other users (e.g., pedestrians), is a highly challenging control task. The development of real-time model predictive control (MPC) strategies that account for both interaction and uncertainty is therefore very desirable. Traditional means of handling uncertainty, such as robust or stochastic approaches, are either too conservative or too risk-prone. Moreover, MPC methodologies typically assume that the uncertain behavior of surrounding users is completely exogenous, thus failing to take interaction into account. To model interactions explicitly, the aim of this PhD project is to develop stochastic models of the surrounding users whose probabilistic structure depends on the states and actions of the other users. The transition probabilities are learned online with machine learning in a moving-horizon fashion, in order to account for dynamic environments. Furthermore, the proposed methodology will provide a natural mechanism for the real-time learning system to safely balance exploitation and exploration: the optimal input will be exploratory (that is, associated with more uncertain state transitions) only if the risk is sufficiently low. The resulting nonlinear MPC (NMPC) formulations will lead to optimal control problems that are more complex and of larger scale than what state-of-the-art embedded solvers can handle. The goal of this project is to develop embedded optimization and online learning algorithms for interaction-aware MPC for autonomous navigation in uncertain environments.
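
As a purely illustrative sketch of the mechanism described above (not part of the project; all names, actions, costs, and probabilities below are hypothetical), the following Python example shows an ego vehicle that re-estimates the action-dependent yielding probability of a surrounding driver over a moving horizon and selects the more uncertain, assertive action only when the estimated conflict risk stays below a threshold. It is a crude two-action stand-in for the interaction-aware transition model and the risk-bounded exploration/exploitation balance outlined in the abstract.

```python
# Illustrative sketch only: hypothetical names, costs, and probabilities.
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

HORIZON = 50                      # moving-horizon window of past interactions
ACTIONS = ["cautious", "assertive"]
COST = {                          # stage cost: (other yields, other does not yield)
    "cautious":  (2.0, 2.0),      # slow but safe either way
    "assertive": (0.5, 6.0),      # fast if the other yields, costly conflict if not
}
RISK_LIMIT = 0.6                  # maximum acceptable estimated conflict probability

# Sliding windows of observed reactions, per ego action (1 = other user yielded).
history = {a: deque(maxlen=HORIZON) for a in ACTIONS}

def yield_probability(action, prior=0.5):
    """Moving-horizon estimate of P(other yields | ego action), with a weak prior."""
    data = history[action]
    return (prior + sum(data)) / (1.0 + len(data))

def choose_action(explore_rate=0.1):
    """Among actions whose estimated risk is acceptable, mostly exploit the lowest
    expected cost; occasionally explore the least-observed admissible action."""
    admissible = []
    for a in ACTIONS:
        p_yield = yield_probability(a)
        expected = p_yield * COST[a][0] + (1.0 - p_yield) * COST[a][1]
        risk = (1.0 - p_yield) if a == "assertive" else 0.0   # conflict only if assertive and no yield
        if risk <= RISK_LIMIT:
            admissible.append((a, expected))
    if not admissible:                                   # nothing safe enough: stay cautious
        return "cautious"
    if rng.random() < explore_rate:                      # explore only within the safe set
        return min(admissible, key=lambda item: len(history[item[0]]))[0]
    return min(admissible, key=lambda item: item[1])[0]  # exploit: lowest expected cost

# Closed-loop simulation: the (unknown) surrounding driver actually yields more
# often when the ego is assertive, which is the interaction effect to be learned.
TRUE_P_YIELD = {"cautious": 0.3, "assertive": 0.8}
for t in range(200):
    action = choose_action()
    yielded = rng.random() < TRUE_P_YIELD[action]
    history[action].append(int(yielded))

print({a: round(yield_probability(a), 2) for a in ACTIONS})
print("preferred action after learning:", choose_action(explore_rate=0.0))
```

In the project itself, this role would be played by an embedded NMPC solver optimizing over a full prediction horizon with the learned, state- and action-dependent transition probabilities, rather than by a simple two-action choice as in this sketch.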

Date: 24 Aug 2021 → Today
Keywords: model predictive control
Disciplines: Automation and control systems
Project type: PhD project