Keynotes

Dr Luc JULIA

Renault
Scientific Director

> Biography

Dr. Luc JULIA is Chief Scientific Officer of Renault. He was CTO and Vice President for Innovation at Samsung Electronics, led Siri at Apple, was CTO at Hewlett-Packard and co-founded several start-ups in Silicon Valley. While pursuing his research at SRI International, he helped found Nuance Communications, today the world leader in speech recognition.

Chevalier of the Légion d'Honneur and member of the Académie Nationale des Technologies, he graduated in mathematics and computer science from the Université Pierre et Marie Curie in Paris, and obtained a PhD in computer science from the École Nationale Supérieure des Télécommunications in Paris.

He is the author of the bestseller "L'intelligence artificielle n'existe pas", holds dozens of patents and is recognized as one of the 100 most influential French developers in the digital world.

Aaditya Ramdas

Carnegie Mellon University
Professor

> Keynote

Remote presentation in English 🇬🇧

Predictive uncertainty quantification under distribution shift.

There are two dominant ways to quantify prediction uncertainty. The first is to use prediction sets, and the second is to require calibrated probabilistic predictions. The two dominant paradigms for achieving these - without making distributional assumptions on the data - are conformal prediction and post-hoc binning. This talk will briefly recap these core ideas (in the setting where the data are i.i.d.) and show how to extend them to practical settings where the distribution may drift or shift over time.
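As a concrete illustration of the first paradigm, here is a minimal sketch of split conformal prediction for regression in the i.i.d. setting. The data, model and function names are illustrative assumptions, not material from the talk.

```python
# Illustrative sketch (not from the talk): split conformal prediction with
# absolute residuals as conformity scores.
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, x_new, alpha=0.1):
    """Return a (1 - alpha) prediction interval for the point x_new."""
    scores = np.abs(y_cal - model(X_cal))            # calibration residuals
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))          # finite-sample corrected rank
    q_hat = np.sort(scores)[min(k, n) - 1]           # conformal quantile
    pred = model(np.atleast_2d(x_new))[0]
    return pred - q_hat, pred + q_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(500, 1))
    y = np.sin(3 * X[:, 0]) + 0.2 * rng.standard_normal(500)
    model = lambda X: np.sin(3 * X[:, 0])            # stand-in for any fitted black box
    lo, hi = split_conformal_interval(model, X[250:], y[250:], np.array([0.5]), alpha=0.1)
    print(f"90% prediction interval at x = 0.5: [{lo:.3f}, {hi:.3f}]")
```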

 

> Biography

Aaditya Ramdas (PhD, 2015) is an assistant professor at Carnegie Mellon University, in the Departments of Statistics and Machine Learning. He was a postdoc at UC Berkeley (2015-2018) and obtained his PhD at CMU (2010-2015), receiving the Umesh K. Gavaskar Memorial Thesis Award. His undergraduate degree was in Computer Science from IIT Bombay (2005-09), and he did high-frequency algorithmic trading at a hedge fund (Tower Research) from 2009-10.

Aaditya was an inaugural recipient of the COPSS Emerging Leader Award (2021), and a recipient of the Bernoulli New Researcher Award (2021). His work is supported by an NSF CAREER Award, an Adobe Faculty Research Award (2019), an ARL Grant on Safe Reinforcement Learning, a Block Center Grant for election auditing, and a Google Research Scholar award (2022) for structured uncertainty quantification, amongst others. He was a CUSO lecturer (2022) and will be a Lunteren lecturer in 2023.

Aaditya's main theoretical and methodological research interests include selective and simultaneous inference (interactive, structured, online, post-hoc control of false decision rates, etc.), game-theoretic statistics (sequential uncertainty quantification, confidence sequences, always-valid p-values, safe anytime-valid inference, e-processes, supermartingales, etc.), and distribution-free black-box predictive inference (conformal prediction, calibration, etc.). His areas of applied interest include privacy, neuroscience, genetics and auditing (elections, real-estate, financial), and his group's work has received multiple best paper awards.

Catuscia Palamidessi

INRIA
Research Director

> Keynote

English presentation 🇬🇧

Private collection of mobility data.

The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy. In this talk, I will discuss some frameworks to understand and mitigate the issue, focusing on iterative methods coming from information theory and statistics, and on the application to location data. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool to recover utility is the iterative Bayesian update (IBU), an instance of the expectation-maximization method from statistics. I will show that the IBU, combined with a version of DP called d-privacy (also known as metric differential privacy), outperforms the state of the art, which is based on algebraic methods combined with the randomized response mechanism, widely adopted by the Big Tech industry (Google, Apple, Amazon, ...).
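For illustration only (not the speaker's code), here is a minimal numpy sketch of the iterative Bayesian update: given a known obfuscation channel C(y|x) and the empirical distribution of the noisy reports, the EM-style update below recovers an estimate of the distribution of true values.

```python
# Minimal IBU sketch under assumed names and a toy randomized-response channel.
import numpy as np

def ibu(C, empirical_y, n_iter=200):
    """Estimate the distribution of true values x from the empirical report distribution."""
    n_x = C.shape[0]
    q = np.full(n_x, 1.0 / n_x)                          # uniform starting guess
    for _ in range(n_iter):
        joint = q[:, None] * C                            # q(x) * C(y | x)
        posterior = joint / joint.sum(axis=0, keepdims=True)   # P(x | y)
        q = posterior @ empirical_y                       # EM update: average posterior over reports
    return q

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_prior = np.array([0.6, 0.3, 0.1])
    C = np.full((3, 3), 0.1)                              # randomized-response style channel
    np.fill_diagonal(C, 0.8)                              # report the true location w.p. 0.8
    x = rng.choice(3, size=20000, p=true_prior)
    y = np.array([rng.choice(3, p=C[xi]) for xi in x])
    empirical_y = np.bincount(y, minlength=3) / len(y)
    print("IBU estimate of the true distribution:", np.round(ibu(C, empirical_y), 3))
```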

> Biography

Catuscia Palamidessi is an Inria Research Director in the Comete project team and a member of the École Polytechnique Computer Science Laboratory (LIX - CNRS/École Polytechnique/Inria).
Her research is characterized by the application of mathematical and logical methods to computer science. She has worked in various fields, including concurrency theory, where she proved separation results between synchronous and asynchronous communication, as well as security and privacy, where she proposed a variant of the "differential privacy" framework, with applications to the protection of location information ("geo-indistinguishability"). More recently, she has begun to explore the ethical challenges of artificial intelligence, in particular fairness and the control of information leakage in machine learning.

François Sillion

CNES
Technical and Digital Director

> Biography

An alumnus of the École Normale Supérieure and holder of a doctorate from the Université Paris-Sud and a habilitation to direct research from the Université Joseph Fourier-Grenoble 1, he has held research positions at Cornell University (USA), CNRS (in Paris and Grenoble) and Inria. He has also been a visiting researcher at MIT (USA) and Microsoft Research (USA), and a lecturer at Ecole Polytechnique. He graduated from the Institut des hautes études pour la science et la technologie in 2008-2009 (promotion Hubert Curien).

His research has focused on techniques for creating computer-generated images: 3D models, "expressive" rendering, lighting simulation; visualization of very large volumes of data and data acquisition from real images. He has supervised fifteen theses and published around a hundred research articles on these subjects, as well as two books on lighting simulation and digital appearance modeling. In 2009, he received the Outstanding Technical Contribution Award from the international Eurographics association.

At Inria, he founded and managed the ARTIS project-team, and held the positions of Scientific Delegate and then Director of the Inria Rhône-Alpes research center. As Inria's Deputy Managing Director for Science, he notably developed support mechanisms for exploratory research projects and cross-disciplinary or interdisciplinary programs in support of the institute's scientific strategy, and coordinated the research component of the government plan on artificial intelligence put in place following the Villani report of March 2018.

In 2019, he founded the Uber Advanced Technologies Center in Paris, a research center dedicated to digital technologies for new mobilities, in particular urban air mobility, which he ran until 2021.

In 2022, he became Technical and Digital Director of CNES.

 

Guest lectures

Marouane Il Idrissi

EDF R&D and Institut de Mathématiques de Toulouse.
Doctoral student

> Presentation

English presentation 🇬🇧

Proportional marginal effects: a response to the shortcomings of Shapley values for quantifying importance.

The use of Shapley values in the context of uncertainty quantification provides a practical solution when inputs (or features) are not independent. This marriage between cooperative game theory and global sensitivity analysis has led to the emergence of Shapley effects, interpretable indices for quantifying the importance attached to the inputs of a black-box model. However, these indices have one drawback: an exogenous variable (i.e., one that does not appear in the model) can be given importance, provided it correlates with endogenous inputs (i.e., those that do appear in the model). The use of another allocation, proportional values, alleviates this problem. In this presentation, we will focus on the interface between cooperative game theory and uncertainty quantification. The differences between Shapley effects and proportional marginal effects will be illustrated using toy cases and real data sets.
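As a toy illustration of the game-theoretic machinery involved (my own sketch, not the speaker's code), the snippet below computes exact Shapley values for a small "explained variance" game. Note how the exogenous-but-correlated input 3 still receives a positive share, which is exactly the drawback mentioned above.

```python
# Exact Shapley allocation for a small cooperative game; in the sensitivity-analysis
# setting, v(S) would be e.g. the fraction of output variance explained by coalition S.
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values of the value function v for a small set of players."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (v(frozenset(S) | {p}) - v(frozenset(S)))
    return phi

if __name__ == "__main__":
    # Toy "explained variance" game on inputs 1, 2, 3; input 3 is exogenous but
    # correlated with input 1, so it still receives a positive Shapley share.
    explained = {frozenset(): 0.0,
                 frozenset("1"): 0.5, frozenset("2"): 0.3, frozenset("3"): 0.4,
                 frozenset("12"): 0.8, frozenset("13"): 0.5, frozenset("23"): 0.6,
                 frozenset("123"): 0.8}
    print(shapley_values(["1", "2", "3"], lambda S: explained[frozenset(S)]))
```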

 

> Biography

Marouane IL IDRISSI is in his 3rd year of a CIFRE PhD between EDF R&D and the Institut de Mathématiques de Toulouse. His thesis focuses on the development of methods for interpreting black-box artificial intelligence (AI) models, in order to certify their use for critical systems. His research focuses on explainable AI (XAI) and sensitivity analysis, using cooperative game theory and optimal transport.

Gabriel Laberge

Polytechnique Montréal.
Doctoral student

> Presentation

English presentation 🇬🇧

Hybrid Interpretable Models: Exploring the Trade-off between Transparency and Performance.

Research into the explainability of machine learning models is currently divided into two families:
1) the development of simple, interpretable models
2) the invention of "post-hoc" methods to explain complex models.
This duality is often presented as a consequence of the "transparency-performance" trade-off, which assumes that interpretable models systematically perform less well than opaque models. Although such a trade-off is part of explainability folklore, few studies have quantitatively justified it. So the question arises: does this trade-off really exist? If so, can we measure it? Or even optimize it?

To answer these questions, we study Hybrid Interpretable Models involving cooperation between a black box and a transparent model.
When predicting on an input x, a gate sends x either to the transparent model or to the black box.
By defining model transparency as the ratio of instances x that are sent to the interpretable component, we can quantitatively measure the trade-off between performance and transparency.

Despite their great potential, hybrid models are currently limited by their learning algorithm. Indeed, the state of the art relies on search heuristics that make learning sub-optimal and unstable. Based on the CORELS algorithm for learning optimal RuleLists, we advance the state of the art by training optimal hybrid models with fixed transparency. Our method, named HybridCORELS, thus enables us to study the trade-offs between transparency and performance without having to worry about model optimality and stability.

A key result of our experiments is that it is possible to obtain optimal hybrid models with more than 50% transparency whose performance is equivalent to (or even better than) the original black box. These results point to an interesting phenomenon: black boxes, while performing well, are often too complex in certain regions of the input space, and it is possible to simplify them in these regions without affecting performance.
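A minimal sketch of the gating idea follows (an assumed interface, not HybridCORELS itself): each instance is routed either to a transparent rule list or to a black box, and transparency is measured as the fraction of instances handled by the rules.

```python
# Illustrative hybrid interpretable model; all names are assumptions for the example.
import numpy as np

class HybridModel:
    """Route each instance to a transparent rule list if a rule fires, else to a black box."""

    def __init__(self, rules, black_box):
        self.rules = rules                 # list of (condition_fn, predicted_label) pairs
        self.black_box = black_box         # any model exposing .predict

    def predict(self, X):
        preds, n_transparent = [], 0
        for x in X:
            for condition, label in self.rules:
                if condition(x):                           # gate: first matching rule wins
                    preds.append(label)
                    n_transparent += 1
                    break
            else:                                          # no rule fired -> opaque model
                preds.append(self.black_box.predict(x.reshape(1, -1))[0])
        self.transparency_ = n_transparent / len(X)        # fraction handled transparently
        return np.array(preds)

if __name__ == "__main__":
    class ConstantBlackBox:                                # trivial stand-in for an opaque model
        def predict(self, X):
            return np.ones(len(X), dtype=int)

    model = HybridModel(rules=[(lambda x: x[0] < 0.0, 0)], black_box=ConstantBlackBox())
    X = np.random.default_rng(2).normal(size=(100, 2))
    model.predict(X)
    print("transparency:", model.transparency_)
```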

 

> Biography

Gabriel Laberge studied Engineering Physics, followed by a Master's degree in Applied Mathematics, where he discovered his passion for data analysis and statistics. To deepen this passion, he began his PhD in Computer Science, focusing on machine learning.
His current research interests lie at the intersection of black-box explainability and uncertainty quantification, and aim to extract reliable conclusions about the behavior of complex models.

François Bachoc

Toulouse Mathematics Institute
Senior Lecturer

> Presentation

English presentation 🇬🇧

Introduction to Gaussian processes with inequality constraints, with an application to coastal flooding risk.

In Gaussian process modeling, inequality constraints make it possible to take expert knowledge into account and thus to improve prediction and uncertainty quantification. Typical examples are when a black-box function is bounded, or monotonic with respect to some of its input variables. We will show how inequality constraints impact the Gaussian process model, the computation of its posterior distribution and the estimation of its covariance parameters. An example will be presented in which a numerical flooding model is monotonic with respect to two input variables, tide and surge.
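As a rough illustration of how a monotonicity constraint can be imposed (a simplistic rejection-sampling approach, not the method presented in the talk), the sketch below draws unconstrained GP posterior paths on a grid and keeps only the non-decreasing ones.

```python
# Toy sketch: monotonicity-constrained GP posterior in 1D by rejection of non-monotone paths.
import numpy as np

def rbf(a, b, ell=0.4, var=1.0):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def monotone_posterior_samples(x_obs, y_obs, x_grid, noise=1e-3, n_draws=5000, seed=0):
    """Unconstrained GP posterior samples on x_grid, filtered to keep non-decreasing paths."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks, Kss = rbf(x_grid, x_obs), rbf(x_grid, x_grid)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ y_obs
    cov = Kss - Ks @ K_inv @ Ks.T
    L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(x_grid)))
    draws = mean + (L @ np.random.default_rng(seed).standard_normal((len(x_grid), n_draws))).T
    return draws[np.all(np.diff(draws, axis=1) >= 0, axis=1)]   # empirical constrained posterior

if __name__ == "__main__":
    x_obs = np.array([0.1, 0.4, 0.9])
    y_obs = np.array([0.0, 0.5, 1.0])                    # data compatible with a monotone function
    samples = monotone_posterior_samples(x_obs, y_obs, np.linspace(0, 1, 20))
    if len(samples):
        print(f"{len(samples)} monotone paths kept; posterior mean at x=1: {samples[:, -1].mean():.2f}")
    else:
        print("no monotone path kept; increase n_draws")
```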

 

> Biography

François Bachoc defended his thesis at Paris Diderot University in 2013. He then completed a 2-year post-doctorate at the University of Vienna. Since 2015, he has been a lecturer at the Institut de Mathématiques de Toulouse. He defended his HDR in 2018. His research topics are theoretical and applied statistics, machine learning, and industrial applications.

Nicolas Couëllan

National Civil Aviation School
Professor

> Presentation

Optimization and machine learning for air traffic systems: towards more robust decisions.

Air traffic management systems are often complex and critical. Recent advances in machine learning mean that more contextual information can be incorporated into their calculations, making the decisions taken by these systems richer. Nevertheless, because they are critical, it is crucial to be able to guarantee their robustness. In the first part of this talk, we will briefly present some applications of machine learning in these systems. In the second part, we will address the issue of robustness of deep learning algorithms through some current research directions. In all this work, we will also see that optimization is at the heart of the techniques, either because it is coupled with machine learning techniques or because it is used as a mathematical model and method for solving the learning problem.

 

> Biography

Nicolas Couëllan is a Professor at the École Nationale de l'Aviation Civile (ENAC). He is also head of the Artificial Intelligence research axis of the ENAC laboratory and a research associate at the Institut de Mathématique de Toulouse. His research interests lie at the crossroads of mathematical optimization and machine learning. He holds an Habilitation à Diriger des Recherches from the Université Paul Sabatier, a PhD and a Master of Science from the University of Oklahoma, USA. He also worked for several years in the paper and automotive industries as an optimization consultant.

Sébastien Gadat

Toulouse School of Economics
Professor of applied mathematics

> Presentation

Optimization and uncertainty: from randomness endured to randomness exploited

We will present two examples of optimization problems in a context perturbed by randomness. In the first example, we will study an aircraft trajectory optimization algorithm whose spatial evolution is perturbed by various sources of noise: winds, air traffic control, etc. We will show how well-known global optimization methods, such as the simulated annealing algorithm, can be made robust to these perturbations. In a second step, we will take the view that noise can be deliberately incorporated into an algorithm to obtain convergence guarantees, and consider the examples of stochastic gradient descent and the ADAM method commonly used to train deep neural networks.
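To make the second point concrete, here is a toy sketch (my own assumptions, not the speaker's material) of stochastic gradient descent and ADAM minimizing a simple quadratic from noisy gradient evaluations, i.e. randomness inside the algorithm while still converging in practice.

```python
# Toy illustration of SGD and ADAM on a noisy quadratic objective (x - 3)^2.
import numpy as np

def noisy_grad(x, rng):
    return 2 * (x - 3.0) + rng.standard_normal()       # exact gradient plus noise

def sgd(x0=0.0, steps=2000, seed=0):
    rng, x = np.random.default_rng(seed), x0
    for t in range(1, steps + 1):
        x -= (0.5 / t) * noisy_grad(x, rng)             # decreasing step size
    return x

def adam(x0=0.0, steps=2000, lr=0.05, b1=0.9, b2=0.999, eps=1e-8, seed=0):
    rng, x, m, v = np.random.default_rng(seed), x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = noisy_grad(x, rng)
        m = b1 * m + (1 - b1) * g                       # first-moment estimate
        v = b2 * v + (1 - b2) * g ** 2                  # second-moment estimate
        m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

if __name__ == "__main__":
    print(f"SGD: {sgd():.3f}   ADAM: {adam():.3f}   (minimizer is 3.0)")
```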

 

> Biography

Professor of Applied Mathematics at the Toulouse School of Economics, his research focuses on the mathematics involved in artificial intelligence. He is mainly interested in questions relating to the efficiency of algorithms used in machine learning, both from an optimization and a statistical point of view. He has supervised a number of theses on these issues, both academic and industrial (CIFRE).

Mark Niklas Müller

Swiss Federal Institute of Technology Zurich - ETHZ
Doctoral student

> Presentation

English presentation 🇬🇧

Realistic Neural Networks with Guarantees

Following the discovery of adversarial examples, provable robustness guarantees for neural networks have received increasing attention from the research community. While relatively small or heavily regularized models with limited standard accuracy can now be efficiently analyzed, obtaining guarantees for more accurate models remains an open problem. Recently, a new verification paradigm has emerged that tackles this challenge by combining a Branch-and-Bound approach with precise multi-neuron constraints. The resulting, more precise verifiers have in turn enabled novel certified training methods which reduce (over-)regularization to obtain more accurate yet certifiable networks. In this talk, we discuss these certification and training methods.
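For context, here is a minimal sketch of the simplest certification technique, interval bound propagation, which the multi-neuron and Branch-and-Bound methods discussed in the talk refine considerably; the tiny network and all names below are illustrative assumptions.

```python
# Interval bound propagation over an l_inf-ball around an input, for a tiny ReLU network.
import numpy as np

def interval_bounds(layers, x, eps):
    """Propagate the box [x - eps, x + eps] through affine + ReLU layers."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
        lo, hi = W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b
        if i < len(layers) - 1:                          # ReLU (monotone) on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    layers = [(rng.standard_normal((4, 2)), np.zeros(4)),   # 2 inputs -> 4 hidden units
              (rng.standard_normal((2, 4)), np.zeros(2))]   # 4 hidden units -> 2 logits
    lo, hi = interval_bounds(layers, x=np.array([0.5, -0.2]), eps=0.05)
    print("certified logit ranges:", list(zip(np.round(lo, 3), np.round(hi, 3))))
```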

 

> Biography

Mark Niklas Müller is a Ph.D. student at the Secure, Reliable, and Intelligent Systems Lab at ETH Zurich, advised by Prof. Martin Vechev. Mark's research focuses on provable guarantees for machine learning models, including both certified training methods and deterministic and probabilistic certification methods for a diverse range of neural architectures.

François-Xavier Briol

University College London
Senior Lecturer

> Presentation

English presentation 🇬🇧

Uncertainty in Numerics: a multi-level approach

Multilevel Monte Carlo is a key tool for approximating integrals involving expensive scientific models. The idea is to use approximations of the integrand to construct an estimator with improved accuracy over classical Monte Carlo. We propose to further enhance multilevel Monte Carlo through Bayesian surrogate models of the integrand, focusing on Gaussian process models and the associated Bayesian quadrature estimators. We show, using both theory and numerical experiments, that our approach can lead to significant improvements in accuracy when the integrand is expensive and smooth, and when the dimensionality is small or moderate. In addition, our approach allows for the quantification of the uncertainty arising due to our finite computational budget in a Bayesian fashion. We conclude the talk with a case study illustrating the potential impact of our method in landslide-generated tsunami modelling, where the cost of each integrand evaluation is typically too large for operational settings.
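As background, here is a bare-bones multilevel Monte Carlo estimator (a sketch under my own toy assumptions, not the speaker's code): each level estimates the expected difference between two successive approximations of the integrand, and the sum telescopes to the expectation of the finest approximation.

```python
# Plain MLMC estimator of E[f(X)] with a toy hierarchy of integrand approximations.
import numpy as np

def mlmc(f_levels, sample_x, n_per_level):
    """Sum of Monte Carlo estimates of E[f_l - f_{l-1}], telescoping to E[f_L]."""
    rng = np.random.default_rng(0)
    estimate = 0.0
    for level, n in enumerate(n_per_level):
        x = sample_x(rng, n)
        fine = f_levels[level](x)
        coarse = f_levels[level - 1](x) if level > 0 else np.zeros(n)
        estimate += np.mean(fine - coarse)            # MC estimate of E[f_l - f_{l-1}]
    return estimate

if __name__ == "__main__":
    # Toy integrand f(x) = exp(x); coarser levels are truncated Taylor expansions,
    # and most samples are spent on the cheap coarse levels.
    f_levels = [lambda x: 1 + x, lambda x: 1 + x + x ** 2 / 2, np.exp]
    sample_x = lambda rng, n: rng.uniform(0, 1, n)
    print("MLMC estimate:", mlmc(f_levels, sample_x, [100000, 10000, 1000]))
    print("exact value  :", np.e - 1)
```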

 

> Biography

François-Xavier is a Lecturer (equivalent to Assistant Professor) in the Department of Statistical Science at University College London. He is also a Group Leader at The Alan Turing Institute, the UK's national institute for Data Science and AI, where he is affiliated to the Data-Centric Engineering programme. There, he leads research on the Fundamentals of Statistical Machine Learning.

Shahaf Bassan

Hebrew University of Jerusalem
Doctoral student in computer science

> Presentation

English presentation 🇬🇧

Towards formally verifying and explaining deep neural networks

The talk will cover the general topic of the formal verification of neural networks. DNN verification can be used to assess trust in neural networks deployed in safety-critical systems, and verification tools can check a wide range of properties, such as robustness and explainability. In this context, the talk will dive in depth into how DNN verification can be used to provide formal and provable explanations for the behavior of neural networks. This is in contrast to most existing AI explainability tools used today, which tend to be heuristic and hence provide no formal guarantees on the explanations they produce.
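A toy illustration of what a "formal" explanation means (my own brute-force sketch over binary features, not the speaker's verifier, which would use constraint solving over the network): a subset of features is a sufficient reason if fixing it forces the same prediction for every completion of the remaining features.

```python
# Exhaustive check of a "sufficient reason" explanation for a tiny stand-in model.
from itertools import product

def predict(x):                                    # stand-in model: x0 AND (x1 OR x2)
    return int(x[0] == 1 and (x[1] == 1 or x[2] == 1))

def is_sufficient_reason(x, S, n_features=3):
    """True iff fixing the features in S forces the model's prediction on x."""
    target = predict(x)
    free = [i for i in range(n_features) if i not in S]
    for values in product([0, 1], repeat=len(free)):
        z = list(x)
        for i, v in zip(free, values):
            z[i] = v
        if predict(z) != target:
            return False
    return True

if __name__ == "__main__":
    x = [1, 1, 0]
    print(is_sufficient_reason(x, {0, 1}))         # True: x0=1 and x1=1 force output 1
    print(is_sufficient_reason(x, {0}))            # False: the output still depends on x1, x2
```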

 

> Biography

Shahaf Bassan is a Ph.D. student in Computer Science at the Hebrew University of Jerusalem. His research focuses on developing techniques to produce formal and provable methods for AI explainability (XAI), and more specifically DNN explainability. These often rely on the use of DNN verification techniques. His work focuses both on the theoretical foundations of AI explainability, as well as building practical techniques that could be used in practice, for fields such as computer vision, natural language processing, and robotic navigation.

Andrei Bursuc

Valeo.ai
Researcher

> Presentation

English presentation 🇬🇧

The many faces of reliability of visual perception for autonomous driving.

In this talk we study the reliability of automatic visual perception models applied to autonomous driving. We first outline the challenges such systems face in the real world, along with the shortcomings of existing approaches based on deep neural networks. We then analyze two strategies for improving the robustness of visual perception systems to distribution shifts (unseen weather conditions, sensor degradation): a robust training architecture (using StyleLess layers) and a carefully designed data augmentation strategy that removes potentially spurious correlations between objects in the datasets (Superpixel-Mix). Thirdly, we discuss a learning-based monitoring approach for semantic segmentation and a strategy to generate failure modes and negative samples to learn from, so as to effectively recognize errors and out-of-distribution objects at runtime.
We conclude the talk with a brief review of current trends and perspectives in the community towards increasingly robust perception models.

 

> Biography

Andrei Bursuc is a Senior Research Scientist at valeo.ai and Inria (Astra team) in Paris, France. He completed his Ph.D. at Mines Paris in 2012 and was a postdoctoral researcher at Inria Rennes (LinkMedia team) and Inria Paris (Willow team). In 2016, he moved to industry at SafranTech to pursue research on autonomous systems. His current research interests concern computer vision and deep learning, in particular the reliability of deep neural networks and learning with limited supervision. Andrei teaches at École Normale Supérieure and École Polytechnique and has organized several tutorials on self-supervised learning and reliability at CVPR, ECCV and ICCV.

Nicolas Brunel

Professor at ENSIIE and Scientific Director of Quantmetry 

> Presentation

English presentation 🇬🇧

Explainability of Machine Learning: moving from local to regional to improve the estimation and understanding of black-box models.

To understand the predictions of a Machine Learning model, it is now common to rely on local explainability methods to complement classical indicators such as variable importances. In the case of tabular data, for example, Shapley values are computed to measure the importance of a variable in the prediction f(x) made for a given observation x. Despite the practical interest of these measures, their limitations are becoming well understood, notably their numerical and statistical instabilities. We introduce regional measures of variable importance, based on an adaptive partitioning of the feature space, which highlight the most important variables region by region. This construction relies on the notion of "Same Decision Probability", which identifies the variables most important for "fixing" a predicted value. We also show how a dual approach can be used to construct counterfactual rules. These results are based on the use of "Random Forests", which provide efficient algorithms and guarantee certain mathematical properties of the indicators obtained. The work presented here was developed as part of Salim Amoukou's PhD thesis at Stellantis.
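A rough Monte Carlo sketch of the "Same Decision Probability" idea mentioned above (an assumed simplification, not the thesis implementation): estimate how often the model's decision is unchanged when the features outside the fixed set are redrawn from a reference sample.

```python
# Simplified Same Decision Probability estimate by resampling the non-fixed features.
import numpy as np

def same_decision_probability(predict, x, S, X_ref, n_draws=1000, rng=None):
    """Fraction of resampled completions that keep the model's decision on x."""
    if rng is None:
        rng = np.random.default_rng(0)
    target = predict(x[None, :])[0]
    Z = X_ref[rng.integers(0, len(X_ref), size=n_draws)].copy()   # redraw "free" features
    Z[:, list(S)] = x[list(S)]                                     # keep the fixed features
    return np.mean(predict(Z) == target)

if __name__ == "__main__":
    predict = lambda X: (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # toy classifier
    X_ref = np.random.default_rng(5).normal(size=(5000, 2))
    x = np.array([1.5, -0.3])
    print("SDP with feature 0 fixed:", same_decision_probability(predict, x, {0}, X_ref))
```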

 

> Biography

An alumnus of ENSAI, N. Brunel obtained a PhD in statistics from Université Paris 6 in 2005, working on the statistical processing of radar signals in collaboration with Thales Air Defence and Telecom SudParis. In 2007, he became a lecturer at ENSIIE (École Nationale Supérieure d'Informatique pour l'Industrie et l'Entreprise), working on the statistical estimation of differential equations, in particular to promote the use of inherently interpretable models in biology. Promoted to Professor at ENSIIE in 2018, Nicolas Brunel also became Quantmetry's Scientific Director in 2020. There, he leads Quantlab's AI R&D work and in particular develops Trusted AI, notably the explainability of "black-box" models and the quantification of uncertainty in Machine Learning through conformal prediction methods and the development of the open-source MAPIE library.

Jean-Christophe Pesquet

CentraleSupélec / Inria
Professor

> Presentation

Fixed point properties of neural networks.

Fixed-point strategies offer a simplifying and unifying framework for modeling, analyzing and solving a large number of problems arising in Data Science. They provide a natural context for studying the behavior of advanced optimization methods. Today, a growing number of problems go beyond the realm of optimization, since their solutions do not minimize a cost function, but satisfy more general equilibrium properties. This talk will provide an introduction to fixed-point methods and show through examples how these tools provide answers to various neural network problems.
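A small worked example of the basic tool (an assumed illustration, not taken from the talk): the Krasnosel'skii-Mann averaged iteration applied to a contractive "layer", converging to its fixed point.

```python
# Averaged fixed-point iteration x_{k+1} = (1 - t) x_k + t T(x_k) on a contractive map.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
W = 0.95 * A / np.linalg.norm(A, 2)      # spectral norm < 1, so T below is a contraction
b = np.array([1.0, -0.5, 0.2])
T = lambda x: np.maximum(W @ x + b, 0)   # ReLU is 1-Lipschitz, so the composition stays contractive

x = np.zeros(3)
for _ in range(500):
    x = 0.5 * x + 0.5 * T(x)             # Krasnosel'skii-Mann (averaged) iteration
print("fixed point:", np.round(x, 4), " residual:", np.linalg.norm(x - T(x)))
```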

 

> Biography

Jean-Christophe Pesquet (IEEE Fellow 2012, EURASIP Fellow 2021) received the engineering degree from Supélec, Gif-sur-Yvette, France, in 1987, and the Ph.D. and HDR degrees from the University Paris-Sud in 1990 and 1999, respectively. From 1991 to 1999, he was an Assistant Professor at the University Paris-Sud and a research scientist at the Laboratoire des Signaux et Systèmes (CNRS). From 1999 to 2016, he was a Professor at University Paris-Est Marne-la-Vallée and, from 2012 to 2016, Deputy Director of the university's Laboratoire d'Informatique (UMR-CNRS 8049). He is currently a Distinguished Professor at CentraleSupélec, University Paris-Saclay, and the director of the Center for Visual Computing (OPIS Inria group). He was also a senior member of the Institut Universitaire de France from 2016 to 2021.

He is the recipient of the ANR AI Chair BRIDGEABLE. His research interests include multiscale analysis, statistical signal processing, inverse problems, imaging, and optimization methods with applications to data sciences and artificial intelligence.