David Vigouroux

AI Research Engineer
IRT Saint Exupéry

David Vigouroux is a data science research engineer at IRT Saint Exupéry. He coordinates explainability activities within the DEEL integrative program.

In 2008, he began his career at an Airbus subsidiary in the defense sector, working on decision-support problems. In 2016 he joined a robotics start-up, where he contributed to the design of a robot, working on control, vision, SLAM, and hardware design. In 2018, he joined IRT Saint Exupéry, where he has worked on unsupervised learning, explainability, bias detection, uncertainty management, anomaly detection, and reinforcement learning.

Don't Lie to Me: Robust and Efficient Explainability with Verified Perturbation Analysis

CONFERENCE SUMMARY

Neural network explainability is a key element in the integration of neural networks into mission-critical systems: a better understanding of a model's internal logic is needed to ensure that the model is not biased. To this end, the scientific community has proposed numerous explainability techniques, such as feature attribution, which identifies the model inputs that have the greatest impact on the model's decision. To do so, feature attribution methods analyze the local behavior of the model around a given sample. This evaluation is carried out stochastically, by querying the model many times, and obtaining a faithful picture of the model's behavior requires a very large number of queries. However, even with a very large number of queries, the behavior of the model in the vicinity of the given sample cannot be studied exhaustively. To overcome this lack of exhaustiveness, we propose to use a "Verified Perturbation Analysis" technique, which provides a complete analysis around a given sample.
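
To make the stochastic estimation concrete, here is a minimal sketch of a perturbation-based attribution in the spirit of masking estimators such as RISE. The function name, the toy linear model, and the parameters are illustrative assumptions, not the specific method presented in the talk:

    import numpy as np

    def masked_attribution(model, x, n_samples=2000, p_keep=0.5, rng=None):
        # Importance of feature i ~ expected model output over the random
        # masks that keep feature i (a RISE-style Monte Carlo estimator).
        rng = np.random.default_rng(rng)
        scores = np.zeros_like(x, dtype=float)
        counts = np.zeros_like(x, dtype=float)
        for _ in range(n_samples):
            mask = rng.random(x.shape) < p_keep   # random binary mask
            out = model(np.where(mask, x, 0.0))   # one query per perturbation
            scores += out * mask
            counts += mask
        return scores / np.maximum(counts, 1.0)

    # Toy stand-in for a trained network: a fixed linear scorer.
    w = np.array([2.0, -0.5, 0.0, 1.0])
    model = lambda x: float(w @ x)

    print(masked_attribution(model, np.ones(4), rng=0))

Even thousands of such queries sample only a vanishing fraction of the neighborhood around the input, which is precisely the gap that verified perturbation analysis is meant to close.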
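
By contrast, verified perturbation analysis bounds the model's output over every point in a neighborhood at once. The sketch below uses interval bound propagation, a standard verified-analysis primitive, through a small ReLU network; the layer shapes and the epsilon value are illustrative assumptions:

    import numpy as np

    def interval_bounds(layers, x, eps):
        # Propagate the L-infinity ball [x - eps, x + eps] through an
        # affine/ReLU network with interval arithmetic. The returned
        # bounds hold for EVERY input in the ball, not just sampled ones.
        lo, hi = x - eps, x + eps
        for i, (W, b) in enumerate(layers):
            Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)  # split by sign
            lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
            if i < len(layers) - 1:                # ReLU on hidden layers
                lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
        return lo, hi

    # Tiny random network: 4 inputs -> 8 hidden units -> 2 outputs.
    rng = np.random.default_rng(0)
    layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
              (rng.normal(size=(2, 8)), np.zeros(2))]
    x = rng.normal(size=4)
    lo, hi = interval_bounds(layers, x, eps=0.05)
    print(lo, hi)  # certified output range over the whole ball

In this spirit, a perturbation that provably cannot move the certified output range yields a guarantee about the model's local behavior that no finite amount of sampling can provide.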