MobiliT.AI is an international event that brings together a community of experts in artificial intelligence (AI) for critical systems in the field of transport and mobility (aeronautics, automotive, rail, space, drones, etc.). This community is made up of researchers from academia and industry, practitioners from companies in the technology and mobility sectors, and experts in operational safety and the engineering of critical embedded systems.
Montreal hosted the 1st edition of MobiliT.AI in 2019, gathering around 170 experts and 26 speakers and panelists. In 2021, in the midst of the COVID-19 pandemic, MobiliT.AI was offered in a 100% virtual format and brought together some 350 participants across some thirty panels and talks.
The 2022 edition will mark a return to in-person attendance: the event will be presented live from the Promotion Hall of the Petit Séminaire de Québec, in the heart of the historic center of Quebec City, Canada.
It is rare, in the history of science, for industry to adopt a new technology so quickly that adoption itself drives research. This is the case today with artificial intelligence, where important theoretical progress in deep learning has led to unprecedented interest, so that these new developments are at the core of technological innovations. However, this enthusiasm raises brand new issues. Machine learning models, especially deep neural networks, can perform well enough to be considered for safety-critical applications (autonomous vehicles, predictive maintenance, medical assessment, ...), but their theoretical properties are not yet well understood. These scientific issues make it difficult to meet the constraints required for widespread deployment (algorithm certification, qualification, explainability) and to gain acceptance by the public at large.
Certification and Theoretical Guarantees
To certify an artificial intelligence algorithm, one must be able to determine its limits and its theoretical properties. While these properties are well known for certain methods, such as linear models and their extensions, some of the most widely used approaches today are difficult to analyze from a mathematical point of view. For certification, several types of guarantees are desirable. Guarantees on the convergence of a learning algorithm ensure that the learning process will lead to a reasonable model, general enough to be used for prediction. Bounds on the error of a model guarantee a degree of control over the risk, which is essential for applications. In the case of deep neural networks, these properties are still very difficult to obtain. For example, initial results suggest that convergence towards local optima could be less penalizing than expected. But we also know that adversarial networks, whose range of potential applications is very wide, are particularly unstable and very difficult to optimize. The robustness of networks is also difficult to guarantee. It has been shown that a network can be fooled by adversarial examples built by exploiting its structure and parameters. A certified algorithm will need to be able to resist this type of attack to some extent.
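The kind of attack mentioned above can be illustrated with a minimal sketch: a fast-gradient-sign perturbation against a small logistic classifier. The weights, input, and step size below are hypothetical, chosen only to make the effect visible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained logistic classifier (weights chosen for illustration).
w, b = np.array([1.5, -2.0, 0.5]), 0.1
x, y = np.array([0.3, -0.4, 0.8]), 1          # clean input, true label

def nll(x):
    """Negative log-likelihood of the true label at input x."""
    p = sigmoid(w @ x + b)
    return -np.log(p) if y == 1 else -np.log(1.0 - p)

# Fast gradient sign method: step along the sign of the input gradient,
# which exploits exactly the structure and parameters of the model.
p_clean = sigmoid(w @ x + b)
grad_x = (p_clean - y) * w                    # d(loss)/dx for logistic loss
x_adv = x + 0.6 * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(p_clean, p_adv)  # confidence in the true class drops sharply
```

Even for this linear model the perturbation provably increases the loss; for deep networks the same recipe produces the adversarial examples that certification must guard against.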
Another desirable guarantee is fairness of treatment. If a database is biased, we know that learning algorithms will tend to reproduce, or even amplify, this bias. The goal of ethical learning (fair learning) is to guarantee the absence of bias with respect to a given variable. This can be achieved by debiasing the training data or by constraining the model. When the bias comes from a problem of data collection, fair learning makes it possible to reduce the generalization error. When the bias is societal, fair learning is a way to comply with emerging legislation that requires learning algorithms to have a fair impact on different categories of the population.
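A simple way to make "fair impact" concrete is a demographic parity check: compare the rate of favourable decisions across groups defined by the sensitive variable. The predictions and group labels below are illustrative, not real data.

```python
import numpy as np

# Hypothetical model decisions (1 = favourable outcome) and a binary
# sensitive attribute for ten individuals.
y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def demographic_parity_gap(y_hat, group):
    """Largest difference in favourable-outcome rates across groups."""
    rates = [y_hat[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

gap = demographic_parity_gap(y_hat, group)
print(gap)  # 0.2: group 0 is favoured 60% of the time, group 1 only 40%
```

Fair-learning methods then either reweight or debias the training data, or add this gap as a constraint or penalty during training, to drive it towards zero.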
Certification and Hybrid Approaches
While artificial intelligence, by offering remarkably efficient and adaptable methods, could well pave the way for a series of technological and industrial revolutions, it is legitimate to question its relevance in comparison with so-called "classical" approaches. Approaches based on logical or mathematical models, in addition to providing, to a large extent, an answer to the problems of certifiability and explainability, are still used to solve certain problems, in particular those with a strong combinatorial aspect.
One goal of this workshop is to compare classical approaches with the new machine learning algorithms, to discuss the synergies between them, and to explore how they could be hybridized.
Model reduction makes it possible to embed systems that would otherwise be extremely complex and costly in terms of computation time. The idea behind reduction is to use machine learning methods to approximate the complex model while reducing the computation cost. The main challenge of this type of technique is to check that this improvement is achieved without significantly degrading performance. We also expect that the learned model does not lead to abnormal behavior.
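The workflow above can be sketched minimally: sample an expensive model offline, fit a cheap surrogate, and then verify that the surrogate's error stays within an acceptable bound. The "expensive" function here is a stand-in chosen for illustration, not a real simulation code.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly high-fidelity simulation (illustrative).
    return np.sin(2.0 * x) + 0.5 * x

# Sample the expensive model once, offline ...
xs = np.linspace(-1.0, 1.0, 50)
ys = expensive_model(xs)

# ... then fit a cheap surrogate (here a degree-5 polynomial) to embed
# in place of the full model.
surrogate = np.poly1d(np.polyfit(xs, ys, deg=5))

# The reduction step must be validated against an error bound.
max_err = np.max(np.abs(surrogate(xs) - ys))
print(max_err)
```

In practice the surrogate would be a neural network and the validation set would cover the whole operational domain, since the critical question is precisely whether the reduced model stays close to the full one everywhere it will be used.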
More generally, embedding artificial intelligence models into systems, especially deep neural networks, raises numerous scientific challenges. The first issue is to constrain the network in order to keep it tractable and parsimonious in its resource usage, in particular its energy consumption. The system specifications may also restrict the way neural networks operate; for instance, they can be very sensitive to the precision of the floating-point encoding. Several other aspects of embedded systems are critical as well. In particular, ensuring their robustness with respect to failures, providing redundancy in computation, and supplying correctness proofs for algorithms are all important challenges.
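The sensitivity to floating-point precision can be seen without any network at all: converting weights to the half-precision (float16) format common on embedded accelerators silently rounds values that float32 stores exactly. The numbers below are chosen only to make the rounding visible.

```python
import numpy as np

# A weight exactly representable in float32 but not in float16:
# float16 spaces consecutive values 2 apart in the range [2048, 4096).
w32 = np.float32(2049.0)
w16 = np.float16(w32)
print(w32, w16)   # 2049.0 becomes 2048.0 -- silently rounded

# Small constants are perturbed as well when a network is quantised:
print(float(np.float16(0.001)))   # no longer exactly 0.001
```

Accumulated over millions of weights and activations, such rounding can shift a network's decisions, which is why precision constraints appear explicitly in embedded-system specifications.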