Gabriel Laberge is a PhD student at Polytechnique Montreal working under the supervision of Prof. Foutse Khomh and Prof. Mario Marchand. He received an M.Sc. in Applied Mathematics from Polytechnique Montreal in 2020, which sparked his curiosity in statistics and machine learning. His research interests currently include post-hoc explainability, decision-making, uncertainty quantification, and causality.
Can We Derive Insight from Post-Hoc Explanations of Uncertain Models?
Post-hoc explainability tools are becoming increasingly available and easy to use. Indeed, online repositories are plentiful, and these tools often require adding only a few lines of code to your ML pipeline to “explain” your model. However, this wide availability comes with a caveat: uninformed users can misuse these tools and draw incorrect conclusions from them. For instance, one could wrongly assume that certain features are important in the real-life mechanism that generated the data because post-hoc explanations of the available model suggest so. By jumping to such a conclusion, one completely ignores our uncertainty about the model.
In this presentation, we argue that explaining a single model is never enough if one is interested in deriving real-world insight from XAI methods. Taking inspiration from ensemble methods for uncertainty quantification, we propose to explain several models in parallel and provide users only with information on which all models agree (i.e., reach a consensus).
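To give a flavor of the consensus idea, here is a minimal sketch (not the authors' actual implementation) of how one might keep only the feature attributions on which an ensemble of models agrees. The function name `consensus_features` and the sign-agreement criterion are illustrative assumptions: each row of `attributions` holds one model's per-feature attribution scores, and a feature is reported only when every model assigns it a nonzero attribution of the same sign.

```python
import numpy as np

def consensus_features(attributions):
    """Indices of features whose attribution sign all models agree on.

    attributions: array of shape (n_models, n_features), one row of
    per-feature attribution scores for each model in the ensemble.
    (Hypothetical helper for illustration only.)
    """
    attributions = np.asarray(attributions)
    signs = np.sign(attributions)
    # A feature reaches consensus when every model gives it the same sign...
    same_sign = np.all(signs == signs[0], axis=0)
    # ...and no model considers it irrelevant (zero attribution).
    nonzero = np.min(np.abs(attributions), axis=0) > 0
    return np.where(same_sign & nonzero)[0]

# Toy example: attributions from three models over four features.
attr = np.array([
    [0.9, -0.2, 0.10, 0.0],
    [0.8,  0.3, 0.20, 0.0],
    [1.1, -0.1, 0.05, 0.0],
])
print(consensus_features(attr))  # -> [0 2]: features 0 and 2 reach consensus
```

Features on which the models disagree (here, feature 1 flips sign and feature 3 is null for every model) are withheld from the user, so only claims supported by the whole ensemble survive.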