AELARS

Towards the Accountable and Explainable Learning-enabled Autonomous Robotic Systems

Deep learning, including deep reinforcement learning, is not only revolutionising computer science but also empowering a range of domains, including engineering, sociology, and healthcare. Much of this success is due to deep neural networks, which have demonstrated outstanding performance in tasks such as image classification, natural language processing, game playing, and planning.

Solutions based on deep learning are now also widely deployed in real-world robotic systems, from environment perception to real-time planning and navigation. Unfortunately, the black-box nature and instability of deep neural networks are raising concerns about the readiness of this technology for safety-critical robotic systems.


This project aims to bridge this gap by unleashing the power of deep learning on safety-critical robotic systems in a safe, controllable, and interpretable way. The major innovations of the project sit at two levels:

  1. The algorithmic level, which will develop a series of safety verification and interpretation techniques that ensure deep learning models produce robust and explainable decisions under adversarial perturbations (a minimal illustration of such a robustness check follows this list).
  2. The application level, which will design and implement the project's algorithms and tools on a real-world robotic platform, so that the resulting robust deep learning models can be applied in practice to support a variety of safety-critical tasks.
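To make the algorithmic track more concrete, the sketch below shows what an empirical adversarial-robustness check of a deep learning model can look like. It is purely illustrative and is not the project's own verification tooling: it uses PyTorch and the well-known FGSM attack, and the toy model, the perturbation bound `epsilon`, and the helper names (`fgsm_perturb`, `is_empirically_robust`) are assumptions chosen for demonstration.

```python
# Illustrative sketch only: an FGSM-based adversarial-robustness check.
# The model, epsilon, and helper names are placeholders, not the
# project's actual verification methods.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon):
    """Craft an FGSM perturbation of x, bounded by epsilon in L-infinity norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    return (x + epsilon * x.grad.sign()).detach()

def is_empirically_robust(model, x, y, epsilon):
    """Return True if the model's prediction survives an FGSM attack."""
    x_adv = fgsm_perturb(model, x, y, epsilon)
    return bool((model(x_adv).argmax(dim=1) == y).all())

if __name__ == "__main__":
    # Toy classifier and input, purely for demonstration.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
    x = torch.randn(1, 4)
    y = model(x).argmax(dim=1)  # treat the clean prediction as the label
    print("robust to eps=0.1 FGSM:", is_empirically_robust(model, x, y, 0.1))
```

An attack-based check like this can only falsify robustness; the verification techniques targeted at the algorithmic level instead aim for guarantees over all perturbations within the bound.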

Lead Investigator: Dr. Wenjie Ruan, University of Exeter


Project Partners:

[Partner logos]

For more information about the project Towards the Accountable and Explainable Learning-enabled Autonomous Robotic Systems, please contact Dr. Wenjie Ruan.