Guillaume SOUDAIN

 
EASA European Aviation Safety Agency
Senior Software Expert

Guillaume Soudain has worked for the European Aviation Safety Agency's (EASA) Certification Directorate since 2006, as a software and airborne electronics expert. In 2014, he was appointed Senior Software Expert, and since then has been responsible for coordinating the software aspects of certification within the Agency. Guillaume was a member of the joint EUROCAE WG-71/RTCA SC-205 committee responsible for producing the ED-12C/DO-178C software standard and its associated documents. He is currently leading the EASA AI project team to implement the EASA AI roadmap. He is also a member of the joint EUROCAE WG-114/SAE G-34 working group on artificial intelligence.

Link: https://www.linkedin.com/in/guillaume-soudain-12a7a613a

"EASA guidelines on AI reliability

Abstract: 

Machine and deep learning are opening up promising prospects in aviation, as in many other fields. However, they raise the crucial question of how much confidence can be placed in these techniques when they are used in safety-critical applications, and how compatible they are with stringent certification requirements.
EASA published its "Initial usable guidance for Level 1 machine learning applications" in April 2021, with a view to anticipating future EASA guidance and requirements for these applications. In this talk, Guillaume Soudain will recall the main elements of EASA's AI roadmap, present the key elements of this initial guidance and outline a number of remaining AI reliability challenges.

Olivier TEYTAUD

 
Facebook AI Research
Research scientist

Olivier Teytaud began working in artificial intelligence in the last century.
After early work in statistics and neural networks, he has divided his time between highly applied work on electrical systems, transport and games, and more theoretical work in optimization and control. He contributes to the Nevergrad platform for system optimization (we'd be delighted to include your test cases!) and to the Polygames platform (neural networks for games).
After working in Russia, at Inria in France and in Taiwan, he joined Google and then Facebook AI Research in France.

 

"AI for games, a step towards mission-critical applications"

Abstract:

An intelligent system typically comprises static parameters (shape, volume, regulation) and dynamic parameters (real-time control). In this sense, such systems are akin to games: an opening (think of chess or Battleship), followed by gameplay.

We present two open source tools for this purpose:

  • Nevergrad, already widely used in industry, which enables derivative-free direct search for robust solutions in noisy environments (see the sketch after this list).
  • Polygames, which has achieved victories against human players in games where humans still had the upper hand.
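
A minimal Nevergrad sketch (an illustration added for this write-up, not material from the talk; assumes "pip install nevergrad", and the noisy toy objective stands in for a real system simulator):

    import numpy as np
    import nevergrad as ng

    def noisy_objective(x: np.ndarray) -> float:
        # True optimum at x = (0.5, 0.5); the noise mimics a stochastic simulator.
        return float(np.sum((x - 0.5) ** 2) + 0.01 * np.random.normal())

    # NGOpt is Nevergrad's meta-optimizer; budget = number of function evaluations.
    optimizer = ng.optimizers.NGOpt(parametrization=ng.p.Array(shape=(2,)), budget=200)
    recommendation = optimizer.minimize(noisy_objective)
    print(recommendation.value)  # point recommended once the budget is spent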

Pierre OLIVIER

 
LeddarTech
Chief Technology Officer

With the company for several years, Pierre Olivier was promoted in 2017 to the key position of Chief Technology Officer. He previously held the role of Vice President, Engineering and Production. In his new role, he is responsible for setting the strategic direction of technology for the company, developing the organization's technological competencies and providing operational support for information technology. Recognized as much for his technical know-how as for his visionary side, Pierre has many years' experience in the development of high-tech products.

Right out of university in 1991, he joined the CML Technologies team, where his ability to lead projects pragmatically and efficiently was soon apparent. He quickly rose through the ranks while developing his management skills. A designer of innovative, flexible, practical and robust products used worldwide in strategic applications such as emergency dispatch and air traffic control, he holds several patents. He continued his career at Adept Technology Canada and DAP Technologies before joining the LeddarTech team in 2010. Pierre holds a degree in electrical engineering from Université Laval and is a member of the Ordre des ingénieurs du Québec.

 

"AI and datasets for driver assistance and autonomous driving"

Abstract:

AI is now essential in driver assistance and autonomous driving systems. The training and evaluation of the perception algorithms used depend largely on the availability of datasets. We will discuss existing datasets, the sensors used, data collection and 3D annotation requirements. We will end with a presentation of the PixSet™ public dataset.

 

Chantelle DUBOIS

 
Canadian Space Agency
Avionics & Software Systems Engineer

Chantelle Dubois is a computer engineer with the Canadian Space Agency. Her main role is avionics and software systems engineer for the Lunar Gateway program, facilitating the delivery of Canadarm3. She also supports the Lunar Exploration Acceleration Program (LEAP), providing insight and support for robotics software. During her undergraduate studies, Chantelle completed three summer internships at the Agency working on Lunar Exploration Analog Deployment (LEAD), integrating software for a prototype rover used in field tests to study the concept of operations for a lunar sample collection mission.

"Artificial Intelligence: Advancing the future of space exploration & space utilization"

Abstract: 

Canadarm3 is Canada's contribution to the NASA-led Lunar Gateway mission, and will be part of an overall effort to make the Lunar Gateway autonomous, managing as much of the Gateway's configuration, maintenance and inspection as possible without intervention by the crew or ground operator. To this end, Canadarm3 will be designed to incorporate increasingly complex capabilities powered by artificial intelligence over its lifetime. This presentation will discuss the AI and autonomy roadmap envisioned for Canadarm3, how this system will contribute to an autonomous space station, and a brief, non-exhaustive summary of how AI is being used elsewhere within the agency.

Juliette MATTIOLI

 
Thales
Senior Artificial Intelligence Expert

Juliette Mattioli began her industrial career in 1990 at Thomson-CSF, with a thesis on pattern recognition using mathematical morphology and neural networks. In 1993, she became a research engineer. As she progressed through the various R&D laboratories she has managed, she extended her spectrum of skills from image processing to semantic information fusion, and from decision support to combinatorial optimization. Her involvement in conference program committees as well as in national bodies (the #FranceIA mission, the "AI 2021" plan for the Île-de-France region, the "Data Science & AI" Hub of the Systematic Paris-Région cluster) and international ones (the G7 of innovators in 2017) also shows her commitment to sharing her knowledge and to advancing corporate research. Since 2010, she has been attached to Thales's technical management, helping to define the company's research and innovation strategy in the algorithmic field, with a particular focus on trusted AI as well as on algorithmic engineering to accelerate the industrial deployment of AI-based solutions.

 

"When AI comes on board

Artificial intelligence (AI) continues to gain ground in the transportation industry. These include driving assistance systems and autonomous vehicles (automotive, rail and air). More discreetly, AI is also being used to manage vehicle fleets, optimize maintenance costs, anticipate risks in relation to road hazards or goods being transported, reduce carbon footprints... But to design and deploy such AI-based solutions, onboardability, safety and cyber-security requirements need to be taken into account.

Patrick PEREZ

 
Valeo
Vice-President, Scientific Affairs

Patrick Pérez is Scientific Director of valeo.ai, an AI research laboratory focused on Valeo's automotive applications, in particular self-driving cars. Before joining Valeo, Patrick Pérez was a researcher at Technicolor (2009-2018), Inria (1993-2000, 2004-2009) and Microsoft Research Cambridge (2000-2004). His research focuses on multimodal scene understanding and digital imaging.

 

"Some Machine Learning challenges for autonomous driving."

Abstract: 

Assisted and autonomous vehicles are safety-critical systems that have to cope in real time with complex, difficult-to-predict and dynamic environments. Training (and testing) the underlying models requires massive amounts of fully annotated driving data, which is unsustainable. Focusing on perception, several valeo.ai projects aimed at training better models with limited supervision will be presented. These include unsupervised domain adaptation, confidence-based pseudo-labeling, zero-shot recognition and GAN-based training-data augmentation for key tasks such as semantic scene segmentation and instance-level object detection.
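
As a rough illustration of one listed ingredient, confidence-based pseudo-labeling (a generic sketch, not valeo.ai code; "model" and "unlabeled_loader" are hypothetical stand-ins for a classifier returning logits and a loader of unlabeled image batches):

    import torch

    @torch.no_grad()
    def make_pseudo_labels(model, unlabeled_loader, threshold=0.9):
        # Keep only unlabeled samples whose predicted class is confident enough;
        # they are then reused as extra (pseudo-labeled) training data.
        model.eval()
        images, labels = [], []
        for x in unlabeled_loader:
            probs = torch.softmax(model(x), dim=1)
            conf, pred = probs.max(dim=1)
            keep = conf >= threshold
            images.append(x[keep])
            labels.append(pred[keep])
        return torch.cat(images), torch.cat(labels)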

 

Foutse KHOMH

 
Polytechnique Montréal
Professor

Foutse Khomh is a full professor of software engineering at Polytechnique Montréal and holds the FRQ-IVADO research chair in software quality assurance for machine learning applications. He received his Ph.D. in software engineering from the Université de Montréal in 2011, with the Prix d'excellence. He also received the CS-Can/Info-Can Award for Outstanding Young Researcher in Computer Science in 2019. His research interests include software maintenance and evolution, machine learning systems engineering, cloud engineering, and reliable and trustworthy machine learning/artificial intelligence. His work has been recognized with four ten-year Most Influential Paper (MIP) awards and six Best Paper/Distinguished Paper awards. He initiated and co-organized the SEMLA (Software Engineering for Machine Learning Applications) symposium and the RELENG (Release Engineering) workshop series. He is on the editorial board of several international software engineering journals and is a senior member of the IEEE. He is also an academic associate member of Mila - Institut québécois d'intelligence artificielle.

Link: http://khomh.net/

"Quality Assurance for Machine Learning-Based Software Systems".

Abstract:

Software systems based on machine learning are increasingly used in various industries, including critical ones, and their reliability is now a key issue. The traditional approach to software development is deductive: the developer writes rules that dictate the system's behavior in a coded program. In machine learning, these rules are instead inferred inductively from training data. This makes it difficult to understand and predict the behavior of software components, and therefore also to verify them adequately. Compared with traditional software, the test space of a software system incorporating machine learning has many more dimensions.
In this presentation, I will talk about the program analysis tools we have developed to enable fault localization and correction in machine learning-based software systems. I will also present some of the test generation techniques we have developed.
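
As a generic illustration of why this test space is hard to cover (an added sketch, not one of the tools from the talk; "model.predict", returning integer class labels for an array of images in [0, 1], is a hypothetical interface), a property-based test checks an invariance over many inputs rather than a single input/output pair:

    import numpy as np

    def test_brightness_invariance(model, images, epsilon=0.02):
        # Property: predictions should be stable under a mild brightness shift.
        preds = model.predict(images)
        perturbed = np.clip(images + epsilon, 0.0, 1.0)
        preds_shifted = model.predict(perturbed)
        agreement = np.mean(preds == preds_shifted)
        assert agreement > 0.99, f"unstable predictions: {agreement:.3f}"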

Liam PAULL

 
University of Montreal
Assistant Professor

Liam Paull is an Assistant Professor at the Université de Montréal and Director of the Laboratoire de robotique et d'intelligence artificielle incarnée de Montréal (REAL). His lab focuses on problems in robotics, including building representations of the world (such as for simultaneous localization and mapping), modeling uncertainty, and building better workflows for teaching robotic agents new tasks (such as through simulation or demonstration). Previously, Liam was a researcher at MIT CSAIL, where he led the TRI-funded autonomous car project. He also completed a postdoc in MIT's Marine Robotics Laboratory, where he worked on SLAM for underwater robots. He received his PhD from the University of New Brunswick in 2013, where he worked on robust and adaptive planning for underwater vehicles. He is co-founder and director of the Duckietown Foundation, dedicated to making engaging robotic learning experiences accessible to all. The Duckietown class was initially taught at MIT, but the platform is now used in many institutions around the world.

Link: liampaull.ca

 

"Quantifying uncertainty in Deep Learning-based perception systems."

Abstract:

A prerequisite for integrating a perception system into an autonomous system is that it can report some calibrated and accurate measure of the confidence associated with its measurements. This is crucial for downstream tasks such as sensor fusion and planning. This is a challenge for perception systems using Deep Learning, as uncertainty can come from a variety of sources. In this talk, we will cover the different types of uncertainty that arise in Deep Learning-based perception systems and discuss common methods for quantifying and calibrating uncertainty measures. We will focus specifically on the application of object detection for autonomous driving. In this context, we will describe our recently proposed method, f-Cal, which explicitly enforces calibration.
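
For orientation, a standard calibration diagnostic (a generic sketch of the expected calibration error, not the f-Cal method itself; "conf" holds predicted confidences in [0, 1] and "correct" holds 0/1 outcomes as numpy arrays):

    import numpy as np

    def expected_calibration_error(conf, correct, n_bins=10):
        # Bin predictions by confidence; a calibrated model has per-bin
        # accuracy close to per-bin average confidence.
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (conf >= lo) & (conf <= hi) if hi == 1.0 else (conf >= lo) & (conf < hi)
            if mask.any():
                gap = abs(correct[mask].mean() - conf[mask].mean())
                ece += mask.mean() * gap  # bin weight x |accuracy - confidence|
        return ece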

Xavier PERROTTON

 
Valeo AI / Autonomous Driving
R&I Department Manager & Senior Expert

Xavier Perrotton has 15 years' experience in artificial intelligence, computer vision, and research & innovation. He is Head of Research & Innovation and a Senior Expert in Artificial Intelligence at Valeo Driving Assistance Research. Based in Paris, he leads research teams pushing back the boundaries of artificial intelligence to make intelligent mobility an everyday reality. Before joining Valeo, he was a researcher and then a research project manager at Airbus Group Innovations, working on computer vision and augmented reality and turning ideas into products.

Specialties: Artificial intelligence, computer vision, machine learning, object recognition, 3D, augmented reality, autonomous driving.

 

Mélanie DUCOFFE

 
Airbus
Researcher in Machine Learning 

Mélanie Ducoffe has been an industrial researcher at the Airbus Research and Technology Center since 2019, seconded part-time to the DEEL project to study robustness in machine learning and its applications to critical systems. Before moving to Toulouse, she completed her master's studies with an internship on generative learning with Yoshua Bengio, then earned a PhD in machine learning at the CNRS in Nice Sophia Antipolis on active learning in deep neural networks. Her current research focuses on the robustness of neural networks, in particular using formal methods.

 

"Probabilistic guarantees for surrogate models

Abstract:

The integration of simulation models developed during the design of a platform opens up new functionalities, but is generally very costly in terms of computing power and hardware constraints. Surrogate models are an effective alternative, but they require additional guarantees of safety for certification. In this work, we study the safety of a black-box surrogate model (e.g. a neural network) that needs to over-approximate a black-box reference model. We derive Bernstein-type deviation inequalities to prove high-probability safety bounds on the surrogate model and on shifted versions of it. We demonstrate the relevance of this approach in an industrial use case: we predict the worst-case braking distance of an aircraft and show how to provably over-approximate the predictions of an already-qualified prediction model.
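
For orientation, the classical Bernstein inequality has the following generic form (the talk derives tailored variants for surrogate-model safety bounds): for i.i.d. variables X_1, ..., X_n taking values in [a, b], with mean mu and variance sigma^2,

    \[
      \mathbb{P}\left( \left| \frac{1}{n}\sum_{i=1}^{n} X_i - \mu \right| \ge t \right)
      \le 2 \exp\left( - \frac{n t^2}{2\sigma^2 + \frac{2}{3}(b - a)\, t} \right).
    \]

The exponential decay in n is what makes high-probability (rather than worst-case) safety bounds obtainable from finitely many model evaluations.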

Sébastien GERCHINOVITZ

 
IRT Saint Exupéry / ANITI / Institut de Mathématiques de Toulouse
Researcher

Sébastien Gerchinovitz is a researcher at IRT Saint Exupéry, working in the DEEL project on machine learning theory and its applications to mission-critical systems. He is also a research associate at the Institut de Mathématiques de Toulouse, and a member of the "Game Theory and Artificial Intelligence" chair at the ANITI institute. After obtaining a PhD in mathematics at the Ecole Normale Supérieure in Paris, he was a lecturer at the Université Toulouse III - Paul Sabatier from 2012 to 2019, where he is currently on secondment. His main research topics are machine learning theory, sequential learning and deep learning.

Link: http://www.math.univ-toulouse.fr/~sgerchin/

 

"Probabilistic guarantees for surrogate models

Abstract:

The integration of simulation models developed during the design of a platform opens up new functionalities, but is generally very costly in terms of computing power and hardware constraints. Surrogate models are an effective alternative, but they require additional guarantees of safety for certification. In this work, we study the safety of a black-box surrogate model (e.g. a neural network) that needs to over-approximate a black-box reference model. We derive Bernstein-type deviation inequalities to prove high-probability safety bounds on the surrogate model and on shifted versions of it. We demonstrate the relevance of this approach in an industrial use case: we predict the worst-case braking distance of an aircraft and show how to provably over-approximate the predictions of an already-qualified prediction model.

Serge GRATTON

 
INP Toulouse / IRIT / ANITI
Professor

Serge Gratton is Professor of Applied Mathematics at the Institut National Polytechnique de Toulouse. Since 2017, he has coordinated the "Parallel Algorithms and Optimization" team at the Institut de Recherche en Informatique de Toulouse, where he pursues research on large-scale optimization and data assimilation algorithms.
Since 2019, he has held the "Data Assimilation and Machine Learning" chair at the Artificial and Natural Intelligence Toulouse Institute (ANITI), in which he explores techniques for incorporating physical constraints into machine learning algorithms. These activities have applications in sectors such as aeronautics, space and the environment.

 

Sylvaine PICARD

 
Safran Electronics & Defense
Chief AI Engineer

After starting her career in the world of SMEs, in 2004 she joined Morpho (now Idemia), a Safran subsidiary developing cutting-edge biometric security technologies. With her team, she developed the world's first contactless biometric recognition sensor: users can identify themselves simply by passing their fingers in front of it. This innovation won her the Trophée des Femmes de l'industrie in 2012, in the Woman of Innovation category, a prize awarded to a woman behind "a spectacular innovation, shaking up the industry's usual practices".
A few years later, her career took another turn when she joined Safran Tech, the Safran group's corporate R&T center, where she became head of a research team in image processing and artificial intelligence. With her colleagues, she develops algorithms for inspecting the quality of aeronautical parts, and she also works on training algorithms for autonomous land and air vehicles. She plays an active role in the DEEL project's certification mission and is a member of the Safran team working on the definition of the Confiance.AI program.
She recently became Chief AI Engineer at Safran Electronics & Defense.

Franck MAMALET

 
IRT Saint Exupéry
Technical Leader in Artificial Intelligence

Franck Mamalet has been an AI expert at IRT Saint Exupéry since 2018, working mainly in the DEEL project on machine learning theory and its applications to mission-critical systems. He is also co-leader of the project's certification mission, which aims to bring together specialists in certification, critical embedded systems and machine learning. Before joining the IRT, he was a researcher at France Telecom R&D from 1997 to 2006, working on embedded systems for image processing, and from 2006 to 2015 he worked at Orange Labs on convolutional and recurrent neural networks for image and video indexing. In 2015 he took over as head of R&D at Brainchip/Spikenet, a startup specializing in spiking neural networks. His current research focuses on the robustness and quantization of neural networks, and on building the link to machine learning certification.

"DEEL - Certification Mission

Abstract:

The Certification Mission is a workgroup within the DEEL project, bringing together some twenty experts in the fields of certification, operational safety, critical embedded systems development and Machine Learning. The industrial and academic members of the workgroup come from the aeronautics, rail, automotive and energy sectors. Some members of the group are involved in other institutions or projects on AI certification, such as AVSI (Aerospace Vehicle Systems Institute), SOTIF (Safety Of The Intended Functionality), and WG-114 of EUROCAE (European Organisation for Civil Aviation Equipment). This group, with its proximity to researchers from the DEEL project, aims to bridge the gap between research into robust and explainable Machine Learning, and certification for mission-critical systems. We will be presenting the initial results of this group, in particular the recently published White Paper "Machine Learning in Certified Systems" (https://hal.archives-ouvertes.fr/hal-03176080).

Edouard PAUWELS

 
Toulouse 3 - Paul Sabatier University
Senior Lecturer

Edouard Pauwels is a lecturer at Toulouse 3 - Paul Sabatier University. He works at the Institut de Recherche en Informatique de Toulouse (IRIT) in the argumentation, decision, reasoning, uncertainty and learning team. His research focuses on numerical optimization, its applications in artificial intelligence and the guarantees it can provide.

 

" Towards a model of algorithmic differentiation for AI ".

Abstract: 

The operation of differentiating a numerical function given in the form of a program is one of the central components of modern AI, particularly in deep learning. In this context, algorithmic differentiation is widely and routinely used outside its domain of validity. The presentation will introduce a model to describe and illustrate the artifacts that mar this mode of computation, and discuss the qualitative consequences for learning algorithms.
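
A well-known artifact of this kind (a minimal sketch, assuming PyTorch and its convention relu'(0) = 0): two programs that compute the same function can be assigned different derivatives by algorithmic differentiation.

    import torch

    x = torch.tensor(0.0, requires_grad=True)
    y = torch.relu(x) - torch.relu(-x)  # equals x for every input
    y.backward()
    print(x.grad)  # tensor(0.) -- yet the mathematical derivative of x at 0 is 1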

 

Jean-Michel LOUBES

 
IMT / Université Toulouse 3 - Paul Sabatier
Professor

Jean-Michel Loubes is Professor of Applied Mathematics at the Institut de Mathématiques de Toulouse, and holder of the "Fair and Robust Learning" chair at the Artificial and Natural Intelligence Toulouse Institute (ANITI). After completing his PhD in Toulouse and Leiden, he was a CNRS researcher at the University of Orsay and then at the University of Montpellier from 2001 to 2007. Since 2007, he has been a professor at the University of Toulouse, where he led the Statistics and Probability team from 2008 to 2012. He has worked in mathematical statistics on estimation methods and optimal rates in machine learning. His current research focuses on the application of optimal transport theory in machine learning, and on issues of fairness and robustness in artificial intelligence. Highly involved in links between academia and industry, he was regional manager of the CNRS Agence de Valorisation des Mathématiques from 2012 to 2017, and has been a member of the Scientific Committee of the CNRS Institut national des sciences mathématiques et de leurs interactions (INSMI) since 2019.

Mario MARCHAND

 
Laval University
Professor

Mario Marchand is a professor in the Department of Computer Science and Software Engineering at Université Laval. Working in the field of machine learning for over 30 years, he first became interested in neural networks and then in performance guarantees for learning algorithms. Over the years, he has proposed several learning algorithms optimizing performance guarantees, such as set covering machines and decision list machines, which produce interpretable models while performing a form of data compression. He has also proposed kernel methods and boosting algorithms that optimize PAC-Bayesian guarantees. Some of these algorithms have been used to predict the type of co-receptor used by the HIV virus and to predict antibiotic resistance in bacteria from their genome.

"The challenges of learning interpretable predictive models"

Abstract: 

A predictive model is interpretable if it can be examined, analyzed and criticized by a human expert. It is thus interpretability that allows us to increase or decrease our confidence in a model, an essential quality for its use in situations with a potential impact on human health, safety and well-being. Motivated by the so-called "performance-interpretability trade-off", the dominant research direction seems to be learning high-performing opaque models, followed by a posteriori explanation at the level of individual instances. However, since interpretable models can also perform well, I will first review the main difficulties involved in efficiently learning transparent models such as decision trees, Boolean DNF formulas and decision lists (a small illustration follows below). I will then propose three research directions that could potentially circumvent these difficulties, namely: 1) learning stochastic interpretable models, 2) minimizing the local risk of interpretable models, and 3) learning interpretable mixtures of experts.
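
As a small illustration of such a transparent model (an added scikit-learn sketch, not material from the talk), a shallow decision tree can be printed and audited rule by rule:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    # export_text renders the learned rules as human-readable if/else splits.
    print(export_text(tree, feature_names=load_iris().feature_names))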

François LAVIOLETTE

 
Laval University
Director of the Center for Massive Data Research

François Laviolette is a full professor in the Department of Computer Science and Software Engineering at Université Laval. His research focuses on artificial intelligence, in particular machine learning. A leader in PAC-Bayesian theory, which helps us better understand machine learning algorithms and design new ones, he is interested in algorithms for solving learning problems related to genomics, proteomics and drug discovery. He is also interested in making artificial intelligence interpretable, with the aim of better integrating it into systems where humans are in the decision-making loop. He is Director of the Centre de recherche en données massives (CRDM) at Université Laval, which brings together over 50 research professors.

 

Louise TRAVÉ-MASSUYÈS

 
Research Director - CNRS
Chair in Diagnostics - ANITI

Louise Travé-Massuyès is Director of Research at the Laboratoire d'Analyse et d'Architecture des Systèmes, Centre National de la Recherche Scientifique (LAAS-CNRS, https://www.laas.fr), Toulouse, France. She obtained a degree in control engineering from the Institut National des Sciences Appliquées (INSA), Toulouse, France, in 1982 and a PhD from INSA in 1984. Her main research interests are in the diagnosis and supervision of dynamic systems, with particular emphasis on qualitative and model-based reasoning methods and data mining. She has been particularly active in building bridges between the Artificial Intelligence and Automatic Control diagnosis communities. She is a member of the Safeprocess technical committee of the International Federation of Automatic Control (IFAC, https://www.ifac-control.org/) and treasurer of the Society of Automatic Control, Industrial and Production Engineering (https://www.sagip.org). She holds the "Synergistic transformations in model-based and data-based diagnosis" chair at ANITI, France (https://aniti.univ-toulouse.fr) and is associate editor of the Artificial Intelligence Journal (https://www.journals.elsevier.com/artificial-intelligence).

"Diagnostics as the key to autonomy and reliability."

Abstract:

Diagnosis and condition monitoring are critical tasks for autonomous systems: they strongly influence the decisions that are made and can be essential to the life of the system. They provide the means to identify faults and react to the various hazards that can affect the system, achieving the desired level of reliability. In this talk, I will present the role of diagnostics in autonomous systems, along with two case studies illustrating this point in the space domain.

Adrien GAUFFRIAU

 
Airbus
Engineer & Data Scientist specializing in critical systems

Adrien Gauffriau is a graduate of Ecole Centrale de Nantes and holds a Master's degree in Critical Embedded Systems from Supaéro. He developed fly-by-wire software for the Airbus A350 and explored the use of multi-core and many-core processors for mission-critical systems. He is currently focusing on the future of embedded artificial intelligence in systems for the transport industry.

Baptiste LEFEVRE

 
Thales Avionics
Advanced Technologies Regulatory Manager

Baptiste Lefevre is in charge of advanced technology regulation at Thales Avionics, where he develops solutions for the certification of Artificial Intelligence products. He is a member of various working groups on this subject, including the French DEEL research group, the EUROCAE WG-114 / SAE G-34 standardization working group, and the AVSI AFE-87 research group.
In addition to AI certification, Baptiste is also in charge of Thales' technical representation at ICAO level and of the implementation of Thales' safety management system.
Prior to joining Thales, Baptiste was in charge of quality and standardization at the French Civil Aviation Authority, where he ensured the consistent and proportionate implementation of regulations in all aviation sectors in France.