Guillaume SOUDAIN

 
EASA (European Aviation Safety Agency)
Senior Software Expert

Guillaume Soudain has been working since 2006 as an expert in airborne software and electronics in the Certification Directorate of the European Aviation Safety Agency (EASA). In 2014, he was appointed Senior Software Expert and has since been responsible for coordinating the software aspects of certification within the Agency. Guillaume was a member of the joint EUROCAE WG-71/RTCA SC-205 committee responsible for producing the ED-12C/DO-178C software standard and its associated documents. He is currently leading the EASA AI project team, whose objective is to implement the EASA AI Roadmap. He is also a member of the joint EUROCAE WG-114/SAE G-34 working group on artificial intelligence.

Link: https://www.linkedin.com/in/guillaume-soudain-12a7a613a

“EASA Guidelines on AI trustworthiness”

Abstract: 

Machine learning and deep learning open up promising prospects for aviation, as in many other fields. However, they raise the crucial question of the level of confidence that can be placed in these techniques when they are used in safety-critical applications, and of their compatibility with strict certification requirements.
EASA published its concept paper on ‘first usable guidance for Level 1 machine learning applications’ in April 2021, with a view to anticipating future EASA guidance and requirements for such applications. In this keynote, Guillaume Soudain will recall the main elements of the EASA AI Roadmap, present the key elements of this first guidance and outline a number of remaining challenges with respect to AI trustworthiness.

Olivier TEYTAUD

 
Facebook AI Research
Research scientist

Olivier Teytaud started working in artificial intelligence in the last century.
After early work in statistics and neural networks, he has divided his time between applied work on electrical systems, transport and games, and more theoretical work in optimization and control. He contributes to the Nevergrad platform for system optimization (we would be delighted to include your test cases!) and to the Polygames platform (neural networks for games).
After working in Russia, at Inria in France and in Taiwan, he worked at Google and is now at Facebook AI Research in France.

 

“AI for games, a step towards critical applications”

Abstract:

An intelligent system typically has static parameters (shape, volume, regulation) and dynamic parameters (real-time control). In this sense, such systems are similar to games: an opening (think of chess or Battleship), followed by in-game play.

Two open source tools are presented for this purpose:

  • Nevergrad, already widely used in industry. Nevergrad supports direct policy search, yielding robust solutions in noisy environments (see the sketch after this list).
  • Polygames, which has enabled victories against humans in games where humans still held out.
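
As a taste of the first tool, here is a minimal sketch using Nevergrad's documented API; the objective function is a toy stand-in for a real system-design cost, not an actual use case:

```python
# Minimal Nevergrad sketch (pip install nevergrad); the objective is a
# toy stand-in for a system-design cost function.
import nevergrad as ng

def cost(x):
    # x is a numpy array holding the static parameters being tuned
    return sum((x - 0.5) ** 2)

# OnePlusOne is one of many bundled optimizers; budget = #evaluations
optimizer = ng.optimizers.OnePlusOne(parametrization=2, budget=100)
recommendation = optimizer.minimize(cost)
print(recommendation.value)  # best parameters found
```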

Pierre OLIVIER

 
LeddarTech
Chief Technology Officer

With the company for several years, Pierre Olivier was promoted in 2017 to the key position of Chief Technology Officer. He previously held the role of Vice President, Engineering and Production. In his new role, he is responsible for setting the strategic technology direction for the company, developing the organization's technology competencies and providing operational support for information technology. Recognized for both his technical expertise and visionary nature, Pierre has many years of experience in developing technology-intensive products.

Upon graduating from university in 1991, he joined the CML Technologies team, where his ability to lead projects in a pragmatic and efficient manner was soon apparent. He quickly climbed the ladder while developing his management skills. He has developed innovative, flexible, practical and robust products used worldwide in strategic applications such as emergency dispatch and air traffic control, and has several patents to his credit. He continued his career at Adept Technology Canada and DAP Technologies before joining the LeddarTech team in 2010. Pierre holds a degree in electrical engineering from Laval University and is a member of the Ordre des ingénieurs du Québec.

 

"AI and datasets for driver assistance and autonomous driving"

Abstract:

AI is now essential in driver-assistance and autonomous-driving systems. The training and evaluation of the perception algorithms they use depend largely on the availability of datasets. We will discuss existing datasets, the sensors used, data collection, and 3D annotation needs, and we will end with a presentation of the public PixSet™ dataset. A toy illustration of a 3D annotation record follows.
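
For readers unfamiliar with 3D annotation, here is a hedged illustration of the generic shape such records often take; this is not the PixSet™ schema, whose actual format is defined by LeddarTech:

```python
# Illustrative only: a generic 3D bounding-box (cuboid) annotation for
# a lidar point cloud -- not the actual PixSet(TM) format.
from dataclasses import dataclass

@dataclass
class Box3D:
    category: str                                # e.g. "car", "pedestrian"
    cx: float; cy: float; cz: float              # box center, sensor frame (m)
    length: float; width: float; height: float   # box extents (m)
    yaw: float                                   # heading around vertical axis (rad)

label = Box3D("car", 12.4, -1.8, 0.9, 4.5, 1.9, 1.6, 0.03)
```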

 

Chantelle DUBOIS

 
Canadian Space Agency
Avionics & Software Systems Engineer

Chantelle Dubois is a computer engineer at the Canadian Space Agency, primarily in the role of Avionics and Software Systems Engineer for the Lunar Gateway Program, facilitating the delivery of Canadarm3. She additionally supports the Lunar Exploration Accelerator Program (LEAP), providing robotic software insight and support. During her undergraduate studies, Chantelle completed several summer internships at the Agency, working on the Lunar Exploration Analogue Deployment (LEAD) and integrating software for a prototype rover used in field tests to investigate the concept of operations for a lunar sample-collection mission.

" AI : Advancing the Future of Space Exploration and Utilization ”

Abstract: 

Canadarm3 is Canada's contribution to the NASA-led Lunar Gateway mission and will be part of an overall effort to make the Lunar Gateway autonomous, handling as much of the configuration, maintenance and inspection of the Gateway as possible without crew or ground-operator intervention. To this end, Canadarm3 will be designed with the capacity to integrate increasingly complex capabilities powered by artificial intelligence over its lifetime. This presentation will discuss the AI and Autonomy Roadmap envisioned for Canadarm3, how this system will contribute to an autonomous space station, and a brief, non-exhaustive summary of how AI is being used elsewhere within the Agency.

Juliette MATTIOLI

 
Thales
Senior Expert in Artificial Intelligence

Juliette Mattioli began her industrial career in 1990 with a thesis on pattern recognition by mathematical morphology and neural networks at Thomson-CSF. In 1993, she became a research engineer. Through her successive promotions and moves, and the various R&D laboratories she has directed, she has extended her skills from image processing to semantic information fusion, decision support and combinatorial optimization. Her presence in conference program committees, in national bodies (#FranceIA mission, “AI 2021” plan for Île-de-France, “Data Science & AI” Hub of the Systematic Paris-Region cluster) and in international ones (G7 of innovators in 2017) reflects her intention to share her knowledge and to contribute to the development of industrial research. Since 2010, she has been attached to the technical department of Thales, contributing to the definition of the research and innovation strategy for the algorithmic domain, with a particular focus on trusted AI but also on algorithmic engineering, in order to accelerate the industrial deployment of AI-based solutions.

 

" When AI comes on board "

Abstract:

Artificial intelligence (AI) is constantly gaining ground in the transportation industry. One thinks of driving-assistance capabilities or autonomous vehicles (automotive, rail or air). More discreetly, AI is also used in the management of vehicle fleets, the optimization of maintenance costs, the anticipation of risks arising from road hazards or from the goods transported, and the reduction of the carbon footprint. But to design and deploy such AI-based solutions, the requirements of embeddability, safety and cybersecurity must be taken into account.

Patrick PEREZ

 
Valeo
Scientific Director

Patrick Pérez is the Scientific Director of valeo.ai, an AI research lab focused on Valeo's automotive applications, in particular self-driving cars. Prior to joining Valeo, he was a researcher at Technicolor (2009-2018), Inria (1993-2000, 2004-2009) and Microsoft Research Cambridge (2000-2004). His research focuses on multimodal scene understanding and digital imaging.

 

“Some challenges of machine learning for autonomous driving”

Abstract: 

Assisted and autonomous vehicles are safety-critical systems that must cope in real time with complex, hard-to-predict, dynamic environments. Training (and testing) the underlying models requires massive amounts of fully annotated driving data, which is not sustainable. Focusing on perception, several projects at valeo.ai toward training better models with limited supervision will be presented. They include unsupervised domain adaptation, confidence-based pseudo-labeling (sketched below), zero-shot recognition and GAN-based training-data augmentation, for key tasks like semantic scene segmentation and instance-level object detection.
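
As background on one of these ingredients, here is a minimal, generic sketch of confidence-based pseudo-labeling; this is an illustration of the general technique, not valeo.ai's implementation. Predictions on unlabeled data are kept as training targets only when the model is sufficiently confident:

```python
# Generic confidence-based pseudo-labeling, illustrative only.
import torch

def pseudo_label(logits: torch.Tensor, threshold: float = 0.9):
    """From unlabeled-batch logits (N x C), keep the samples whose top
    softmax probability exceeds `threshold`; return their indices and
    hard pseudo-labels to be used as extra training targets."""
    probs = torch.softmax(logits, dim=1)
    confidence, labels = probs.max(dim=1)
    keep = confidence >= threshold
    return keep.nonzero(as_tuple=True)[0], labels[keep]

# Example: 4 unlabeled samples, 3 classes
idx, y_hat = pseudo_label(torch.randn(4, 3) * 5.0)
```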

 

Foutse KHOMH

 
Polytechnique Montreal
Professor

Foutse Khomh is a full professor of software engineering at Polytechnique Montréal and holds the FRQ-IVADO research chair on software quality assurance for machine learning applications. He received his PhD in software engineering from the University of Montreal in 2011, with the Excellence Award. He also received the CS-Can/Info-Can Outstanding Young Researcher Award in Computer Science in 2019. His research includes software maintenance and evolution, machine learning systems engineering, cloud engineering, and reliable and trustworthy machine learning/artificial intelligence. His work has been recognized with four ten-year Most Influential Paper (MIP) awards and six Best Paper/Distinguished Paper awards. He initiated and co-organized the SEMLA (Software Engineering for Machine Learning Applications) symposium and the RELENG (Release Engineering) workshop series. He is on the editorial board of several international software engineering journals, a senior member of the IEEE, and an academic associate member of the Mila Institute.

Link: http://khomh.net/

“Quality Assurance of Software Systems Based on Machine Learning”

Abstract:

Software systems based on machine learning are increasingly used in various industries, including in critical areas, so their reliability is now a key issue. The traditional approach to software development is deductive: rules that dictate the behavior of the system are written down as a program. In machine learning, these rules are instead inferred inductively from training data. This makes it difficult to understand and predict the behavior of software components, and thus to verify them adequately. Compared to traditional software, the dimensions of the test space of a software system incorporating machine learning are much larger. In this talk, I will discuss the program analysis tools we have developed to enable fault localization and correction in machine-learning-based software systems, and I will also present some test generation techniques that we have developed; a generic flavor of such testing is sketched below.
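By way of illustration only (this is a textbook metamorphic-testing pattern, not the speaker's tooling, and `model.predict` is an assumed Keras-style API), one common way to generate tests without ground-truth labels is to assert invariance relations on the model:

```python
# Illustrative metamorphic test for an image classifier: a small
# brightness shift should not flip the predicted class. `model` is
# assumed to expose a Keras-style predict(batch) -> probabilities.
import numpy as np

def top_class(model, image: np.ndarray) -> int:
    return int(np.argmax(model.predict(image[None, ...])[0]))

def test_brightness_invariance(model, image: np.ndarray, delta=0.05):
    original = top_class(model, image)
    shifted = top_class(model, np.clip(image + delta, 0.0, 1.0))
    assert original == shifted, "metamorphic relation violated"
```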

Liam PAULL

 
University of Montreal
Assistant Professor

Liam Paull is an assistant professor at the University of Montreal and director of the Robotics and Embodied AI Lab (REAL) in Montreal. His lab focuses on robotics problems, including building representations of the world (such as with simultaneous localization and mapping), modeling uncertainty, and building better workflows to teach robotic agents new tasks (such as through simulation or demonstration). Previously, Liam was a research scientist at MIT CSAIL, where he led the TRI-funded self-driving car project. He was also a postdoctoral fellow in the marine robotics lab at MIT, where he worked on SLAM for underwater robots. He received his PhD from the University of New Brunswick in 2013, where he worked on robust and adaptive planning for underwater vehicles. He is the co-founder and director of the Duckietown Foundation, which is dedicated to making engaging robotics learning experiences accessible to all. The Duckietown class was originally taught at MIT, but the platform is now in use in many institutions around the world.

Link: liampaull.ca

 

“Quantifying uncertainty in deep learning based perception systems”

Abstract:

A prerequisite for integrating a perception system into an autonomous system is that it report a calibrated and accurate measure of confidence associated with its outputs. This is crucial for downstream tasks like sensor fusion and planning, and it is challenging in perception systems that leverage deep learning because uncertainty can stem from different sources. In this talk, we will cover the different types of uncertainty that manifest in deep-learning-based perception systems and discuss common methods to quantify and calibrate uncertainty measures; one standard calibration diagnostic is sketched below. We will focus specifically on the application of object detection for autonomous driving. In this context, we will describe our recently proposed method, f-Cal, which explicitly enforces calibration.
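
For orientation, here is a minimal sketch of the expected calibration error, a standard textbook diagnostic (not the f-Cal method itself), which compares a model's stated confidence to its empirical accuracy within confidence bins:

```python
# Expected calibration error (ECE), a standard calibration diagnostic.
import numpy as np

def expected_calibration_error(confidence, is_correct, n_bins=10):
    """confidence: predicted top-class probabilities in [0, 1];
    is_correct: 1 if the prediction matched the label, else 0."""
    confidence = np.asarray(confidence, dtype=float)
    is_correct = np.asarray(is_correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(confidence[in_bin].mean() - is_correct[in_bin].mean())
            ece += in_bin.mean() * gap  # weight gap by bin population
    return ece

print(expected_calibration_error([0.95, 0.6, 0.8, 0.99], [1, 0, 1, 1]))
```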

Xavier PERROTTON

 
Valeo AI / Autonomous Driving
R&I Department Manager & Senior Expert

Xavier Perrotton has 15 years of experience in artificial intelligence, computer vision, and research & innovation. He is Head of Research & Innovation and Senior Expert in Artificial Intelligence at Valeo Driving Assistance Research. Based in Paris, his main mission is to lead the research teams in pushing the boundaries of artificial intelligence to make intelligent mobility an everyday reality. Before joining Valeo, he was a researcher and then a research project manager at Airbus Group Innovations, working on computer vision and augmented reality and turning ideas into products.

Specialities: Artificial intelligence, computer vision, machine learning, object recognition, 3D, augmented reality, autonomous driving.

 

Melanie DUCOFFE

 
Airbus
Machine Learning Researcher 

Melanie Ducoffe has been an industrial researcher at the Airbus Research and Technology Center since 2019, seconded part-time to the DEEL project to study robustness in machine learning and its applications to critical systems. Before moving to Toulouse, she completed her master's studies with an internship on generative learning with Yoshua Bengio, then a doctorate in machine learning at the CNRS in Nice Sophia Antipolis on active learning for deep neural networks. Her main current research activities concern the robustness of neural networks, in particular through formal methods.

 

“High-probability guarantees for surrogate models”

Abstract:

Embedding simulation models developed during the design of a platform opens up new functionalities but is usually very costly in terms of computing power and hardware constraints. Surrogate models are an efficient alternative but require additional certification to guarantee their safety. In this work, we study the safety of a black-box (e.g., neural network) surrogate model that should over-approximate a black-box reference model. We derive Bernstein-type deviation inequalities to prove high-probability safety bounds on the surrogate model and on shifted versions of it. We demonstrate the relevance of the approach on an industrial use case: we predict the worst-case braking distance of an aircraft and show how to provably over-approximate the predictions of an already qualified prediction model.
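
For orientation, here is the classical textbook form of such a bound (the talk derives specialized variants): for i.i.d. random variables $X_1, \dots, X_n$ with variance $\sigma^2$ and $|X_i - \mathbb{E}[X_i]| \le M$ almost surely, Bernstein's inequality controls the deviation of the empirical mean:

$$\mathbb{P}\left( \frac{1}{n}\sum_{i=1}^{n} \big(X_i - \mathbb{E}[X_i]\big) \ge t \right) \;\le\; \exp\left( - \frac{n t^2}{2\sigma^2 + \frac{2}{3} M t} \right).$$

Bounds of this type tighten when the variance is small, which is what makes them attractive for certifying that a surrogate over-approximates a reference model except with small probability.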

Sébastien GERCHINOVITZ

 
IRT Saint Exupéry / ANITI / IMT
Researcher

Sebastien Gerchinovitz is a researcher at IRT Saint Exupéry, working in the DEEL project on machine learning theory and its applications to critical systems. He is also an associate researcher at the Toulouse Institute of Mathematics, and a member of the "Game Theory and Artificial Intelligence" chair at ANITI. After a PhD in mathematics from the Ecole Normale Supérieure de Paris, he was a lecturer at the University of Toulouse III - Paul Sabatier from 2012 to 2019 and is currently on secondment. His main research topics are machine learning theory, sequential learning and deep learning.

Link: http://www.math.univ-toulouse.fr/~sgerchin/

 

“High-probability guarantees for surrogate models”

Abstract:

Embedding simulation models developed during the design of a platform opens up new functionalities but is usually very costly in terms of computing power and hardware constraints. Surrogate models are an efficient alternative but require additional certification to guarantee their safety. In this work, we study the safety of a black-box (e.g., neural network) surrogate model that should over-approximate a black-box reference model. We derive Bernstein-type deviation inequalities to prove high-probability safety bounds on the surrogate model and on shifted versions of it. We demonstrate the relevance of the approach on an industrial use case: we predict the worst-case braking distance of an aircraft and show how to provably over-approximate the predictions of an already qualified prediction model.

Serge Gratton

 
INP Toulouse / IRIT / ANITI
Professor

Serge Gratton is Professor of Applied Mathematics at the Institut National Polytechnique de Toulouse. He has been coordinating the "Parallel Algorithms and Optimization" team at the Toulouse Institute of Computer Science since 2017, and develops his research activities around large-scale optimization and data assimilation algorithms.
Since 2019, he has held the "Data Assimilation and Machine Learning" chair at the Artificial and Natural Intelligence Toulouse Institute (ANITI), in which he explores techniques for incorporating physical constraints into machine learning algorithms. These activities have applications in sectors such as aeronautics, space and the environment.

 

Sylvaine PICARD

 
Safran Electronics & Defense
Chief AI Engineer

After starting her career in the world of SMEs, she joined Morpho (now Idemia), a Safran subsidiary that develops cutting-edge techniques in the field of biometric security, in 2004. With her team, she developed the world's first contactless biometric recognition sensor, which lets people identify themselves by simply passing their fingers in front of it. This innovation earned her the Trophée des Femmes de l'industrie in 2012, in the Woman of Innovation category, an award that recognizes a woman who has created "a spectacular innovation that challenges the usual practices of the sector". A few years later, her career took another turn when she joined Safran Tech, the Safran group's corporate R&T center, where she became head of a research team in image processing and artificial intelligence. With her colleagues, she develops algorithms capable of controlling the quality of aeronautical parts, and she also works on training algorithms for autonomous ground and air vehicles. She actively participates in the DEEL project certification mission and is part of the team working on the definition of the Confiance.AI program for Safran. Recently, she became Chief AI Engineer at Safran Electronics & Defense.

Franck MAMALET

 
IRT Saint Exupéry
Technical Leader in Artificial Intelligence

Franck Mamalet has been an AI expert at IRT Saint Exupéry since 2018, working mainly in the DEEL project on machine learning theory and its applications to critical systems. He is also co-leader of the project's certification mission, which aims to bring together specialists in certification, critical embedded systems and machine learning. Before joining the IRT, he was a researcher at France Telecom R&D from 1997 to 2006, working on embedded systems for image processing; from 2006 to 2015, at Orange Labs, he conducted research on convolutional and recurrent neural networks for image and video indexing. In 2015 he took over the management of the R&D cell of Brainchip/SpikeNet, a startup specialized in spiking neural networks. His current research focuses on the robustness and quantization of neural networks, and on building the link to machine learning certification.

“DEEL - Certification Mission”

Abstract:

The certification mission is a workgroup within the DEEL project gathering about 114 experts in the fields of certification, dependability, critical embedded systems development, and machine learning. The industrial and academic members of the workgroup come from the aeronautics, railway, automotive and energy fields. Some members of the group are involved in other institutions or projects on AI certification, such as AVSI (Aerospace Vehicle Systems Institute), SOTIF (Safety Of The Intended Functionality), and EUROCAE WG-114 (European Organisation for Civil Aviation Equipment). This group, thanks to its proximity to the DEEL project researchers, aims at bridging the gap between research on robust and explainable machine learning and certification for critical systems. We will present the first results of this group, in particular the recently published white paper "Machine Learning in Certified Systems" (https://hal.archives-ouvertes.fr/hal-03176080).

Edouard PAUWELS

 
Toulouse III University - Paul Sabatier
Lecturer

Edouard Pauwels is a lecturer at the University of Toulouse III - Paul Sabatier. He conducts his work at the Toulouse Computer Science Research Institute (IRIT), in the argumentation, decision, reasoning, uncertainty and learning team. His research focuses on numerical optimization, its applications in artificial intelligence, and the guarantees it can provide.

 

“Towards a model of algorithmic differentiation for AI”

Abstract: 

Differentiating a numerical function given in the form of a program is one of the central operations of modern AI, especially in deep learning. In this setting, algorithmic differentiation is widely and routinely used outside its domain of validity. The presentation will introduce a model to describe and illustrate the artifacts that mar this mode of computation, and discuss their qualitative consequences on learning algorithms; a tiny example of such an artifact follows.
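
To make the phenomenon concrete, here is a classic minimal example (our illustration, in PyTorch): the function relu(x) - relu(-x) equals x everywhere, so its true derivative is 1, yet algorithmic differentiation returns 0 at x = 0 because of the subgradient convention chosen for relu:

```python
import torch

# f(x) = relu(x) - relu(-x) is identically equal to x, so the true
# derivative is 1 everywhere. Autodiff composes the relu'(0) = 0
# convention and reports 0 at x = 0 -- an artifact of the computation.
x = torch.tensor(0.0, requires_grad=True)
f = torch.relu(x) - torch.relu(-x)
f.backward()
print(x.grad)  # tensor(0.), although f'(0) = 1
```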

 

Jean-Michel Loubes

 
IMT / University of Toulouse III - Paul Sabatier
Professor

Jean-Michel Loubes is Professor of Applied Mathematics at the Toulouse Institute of Mathematics and holder of the "Fair and Robust Learning" chair at the Artificial and Natural Intelligence Toulouse Institute (ANITI). After a PhD in Toulouse and Leiden, he was a CNRS researcher from 2001 to 2007 at the University of Orsay and then at the University of Montpellier. Since 2007, he has been a professor at the University of Toulouse, where he led the Statistics and Probability team from 2008 to 2012. He has worked in mathematical statistics on estimation methods and optimal rates in machine learning. His current research focuses on the application of optimal transport theory in machine learning and on the issues of fairness and robustness of artificial intelligence. Very involved in the links between academia and industry, he was regional manager of the Agence de Valorisation des Mathématiques of the CNRS from 2012 to 2017 and has sat since 2019 on the Scientific Committee of the Institut des Sciences Mathématiques et de leur Interaction (INSMI) of the CNRS.

Mario MARCHAND

 
Laval University
Professor

Mario Marchand is a professor in the Department of Computer Science and Software Engineering at Université Laval. He has been working in the field of machine learning for more than 30 years. He first became interested in neural networks and then in performance guarantees for learning algorithms. Over the years, he has proposed several learning algorithms optimizing performance guarantees such as set covering machines and decision list machines producing interpretable models while performing a form of data compression. He also proposed kernel methods and boosting algorithms that optimize PAC-Bayesian guarantees. Some of these algorithms have been used to predict the type of co-receptor used by the HIV virus and to predict antibiotic resistance in bacteria from their genome.

" The challenges of learning interpretable predictive models "

Abstract: 

A predictive model is said to be interpretable if it can be scrutinized, analyzed, and criticized by human expertise. As such, interpretability allows us to increase or decrease our trust in a model, which is essential for its deployment in critical areas that can impact human health, security, and well-being. Inspired by the so-called “accuracy-interpretability trade-off”, the current main trend is to build accurate black-box models and then perform post-hoc explanations on individual instances. However, since interpretable models can also be accurate, I will first review the main difficulties of achieving efficient learning of transparent models such as decision trees, DNF Boolean formulae and decision lists (a toy example of such a transparent model follows). I will then propose three research directions where these difficulties could perhaps be circumvented, namely: 1) learning stochastic interpretable models, 2) local risk minimization of interpretable models, 3) learning interpretable mixtures of experts.
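
For readers who want a concrete picture, here is a minimal sketch (our illustration, using scikit-learn on its bundled iris data) of a transparent model: a depth-limited decision tree whose decision rules can be printed and scrutinized directly:

```python
# A transparent model one can read: a depth-limited decision tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every prediction can be traced through these human-readable rules.
print(export_text(tree, feature_names=[
    "sepal length", "sepal width", "petal length", "petal width"]))
```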

François Laviolette

 
Laval University
Director of the Big Data Research Center

François Laviolette is a full professor in the Department of Computer Science and Software Engineering at Université Laval. His research focuses on artificial intelligence, particularly machine learning. He is a leading contributor to PAC-Bayesian theory, which allows us to better understand machine learning algorithms and to design new ones. He is interested in machine learning algorithms for solving problems related to genomics, proteomics and drug discovery, and in making artificial intelligences interpretable in order to better integrate systems where humans are in the decision loop. He is the director of the Centre de recherche en données massives (CRDM) at Université Laval, which brings together a large number of research professors.

 

Louise Travé-Massuyès

 
Research Director - CNRS
Chair in diagnostics - Aniti

Louise Travé-Massuyès holds a position of Directrice de Recherche at the Laboratoire d'Analyse et d'Architecture des Systèmes, Centre National de la Recherche Scientifique (LAAS-CNRS, https://www.laas.fr), Toulouse, France. She graduated in control engineering from the Institut National des Sciences Appliquées (INSA), Toulouse, France, in 1982 and received her Ph.D. degree from INSA in 1984. Her main research interests are in dynamic systems diagnosis and supervision, with special focus on model-based and qualitative reasoning methods and data mining. She has been particularly active in establishing bridges between the diagnosis communities of Artificial Intelligence and Automatic Control. She is a member of the Safeprocess Technical Committee of the International Federation of Automatic Control (IFAC, https://www.ifac-control.org/) and treasurer of the Society of Automatic Control, Industrial and Production Engineering (https://www.sagip.org). She holds the chair “Synergistic transformations in model-based and data-based diagnosis” at the Artificial and Natural Intelligence Toulouse Institute (ANITI, https://aniti.univ-toulouse.fr) and serves as Associate Editor for the Artificial Intelligence Journal (https://www.journals.elsevier.com/artificial-intelligence).

“Diagnosis as a key element for autonomy & dependability”

Abstract:

Diagnosis and state tracking are critical tasks for autonomous systems because they strongly influence the decisions that are made, and they may be essential to the life of the system. They provide the means to identify faults and to react to the various hazards that can affect the system, achieving the desired level of dependability. In this talk, I will present the role of diagnosis in autonomous systems and two case studies illustrating the point in the space domain; a toy model-based residual check follows.
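
As a hedged, toy illustration of the model-based flavor of diagnosis (our sketch, not one of the case studies): a residual compares the sensed behavior against a model's prediction, and a sustained deviation flags a fault:

```python
# Toy model-based fault detection: flag a fault when the residual
# between measured and model-predicted output exceeds a threshold.
def detect_fault(measured, predicted, threshold=0.5, patience=3):
    """Return the time step at which a fault is declared (or None).
    `patience` consecutive violations avoid alarms on noise spikes."""
    consecutive = 0
    for t, (y, y_hat) in enumerate(zip(measured, predicted)):
        residual = abs(y - y_hat)
        consecutive = consecutive + 1 if residual > threshold else 0
        if consecutive >= patience:
            return t
    return None

# The sensor drifts away from the model after step 4:
print(detect_fault([1.0, 1.1, 1.0, 1.2, 2.0, 2.4, 2.9],
                   [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))
```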

Adrien GAUFFRIAU

 
Airbus
Critical Software and Data Scientist Engineer

Adrien Gauffriau graduated from École Centrale de Nantes and holds a master's degree in critical embedded systems from Supaéro. He developed fly-by-wire software for the Airbus A350 and explored the use of multi-core and many-core processors for critical systems. His current interests include the future of embedded artificial intelligence in systems for the transport industry.

Baptiste LEFEVRE

 
Thales Avionics
Advanced Technologies Regulation Manager

Baptiste Lefevre is Advanced Technologies Regulation Manager at Thales Avionics, in charge of developing solutions for the certification of artificial intelligence products. He is a member of various working groups on this matter, in particular the French research group DEEL, the standardization working group EUROCAE WG-114 / SAE G-34, and the AVSI research group AFE-87.
In addition to AI certification, Baptiste is also in charge of Thales's technical representation at ICAO level and of the implementation of the Thales Safety Management System.
Before joining Thales, Baptiste was Quality and Standardization Manager at the French Civil Aviation Authority, where he ensured consistent and proportionate implementation of regulations across all aviation sectors in France.