Rigorous Graphical Explainable AI for Higher-Risk Applications

Deep Neural Networks (DNNs) are common in many areas, such as human-robot interaction (HRI), natural language processing, and planning tasks. However, their use in high-risk industrial applications is limited by:

1. their lack of interpretability and explainability;

2. their unquantified robustness;

3. their requirement for large amounts of training data.

This project will use newly developed combinations of advanced classical robust signal processing and mathematical techniques to provide explainable machine-learning alternatives to common DNN tasks. This approach will make modelling assumptions explicit, while using only a fraction of the variables used by DNNs (which typically require tens of millions). This can greatly increase training efficiency, aid explainability, and improve robustness.

The project will evaluate and demonstrate this approach on a question-and-answer application using Equinor’s Volve Data repository, which contains over 3 TB of oil field reports.
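The summary does not name the underlying technique, so purely as an illustration, the sketch below shows the kind of transparent, low-parameter alternative being described: a classical TF-IDF retriever answering a question over a handful of report snippets. The report texts, the question, and the use of scikit-learn are assumptions made for this example, not details taken from the project.

```python
# Minimal sketch of a classical retrieval-style question-answering baseline.
# NOTE: an illustrative assumption, not the project's published method;
# the report snippets and question are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "Daily drilling report: mud weight increased to stabilise the well bore.",
    "Production report: water injection rates adjusted on the Volve field.",
    "Completion report: perforation carried out in the target reservoir zone.",
]

# Every modelling assumption is explicit and inspectable: the token pattern,
# the term-weighting scheme, and the similarity measure, unlike the learned
# weights inside a DNN.
vectoriser = TfidfVectorizer(stop_words="english")
doc_matrix = vectoriser.fit_transform(reports)

question = "What was done to stabilise the well bore?"
q_vec = vectoriser.transform([question])

# Rank the reports by cosine similarity to the question and return the best.
scores = cosine_similarity(q_vec, doc_matrix).ravel()
best = scores.argmax()
print(f"Best match (score {scores[best]:.2f}): {reports[best]}")
```

A pipeline like this uses only a vocabulary-sized set of term weights rather than millions of opaque parameters, which is the contrast with DNNs the paragraph above draws.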

The project goals are to:

1. Show that the approach achieves results comparable to state-of-the-art DNN implementations, especially on smaller data sets.

2. Provide an example data-driven, graphical explanation system that helps implementers understand the operation and limitations of the approach (an illustrative sketch follows this list).

3. Show that the above increases developers’ and investors’ confidence in these techniques.
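As a rough illustration of goal 2, the sketch below renders the kind of per-term contribution chart a data-driven graphical explanation system might produce for a retrieval match. The contribution values are hypothetical and the use of matplotlib is an assumption; the project's actual explanation system is not described in this summary.

```python
# Minimal sketch of a data-driven graphical explanation: a bar chart of the
# term weights that drove a retrieval match. The values are hypothetical.
import matplotlib.pyplot as plt

# Hypothetical per-term contributions to one question/report match score.
contributions = {"stabilise": 0.41, "bore": 0.33, "mud": 0.12, "weight": 0.09}

# Sort ascending so the strongest contributor appears at the top of the chart.
terms, weights = zip(*sorted(contributions.items(), key=lambda kv: kv[1]))
plt.barh(terms, weights)
plt.xlabel("Contribution to match score")
plt.title("Why this report was retrieved")
plt.tight_layout()
plt.show()
```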

Lead Investigator: Professor Mike Chantler, Heriot-Watt University

Project Partners:

[Partner logos]

For information about the Rigorous Graphical Explainable AI for Higher-Risk Applications project, please contact Prof. Mike Chantler.