Abstracts and bios of the 1st TrustMeIA Workshop, Toulouse, 5th July 2019
Michael Fisher, Univ. Liverpool, UK
Title: Trust me - you can see my intention
Abstract: Trust is a complex issue, at least encompassing trust in the reliable functioning of a system and, in the case of autonomous systems, trust in what the system is trying to do. The first of these matches traditional reliability engineering; the second requires that we (correctly) expose the "intentions" the system has. In this talk, I will describe how the use of rational agents at the core of autonomous systems forms the basis not only for transparency and explainability, but also for verifiability of behaviour and decision-making. Once we have robotic systems based on such hybrid agent architectures, we can rigorously analyse them and move towards not only trustworthiness, but safe, ethical and responsible behaviour.
Bio: Professor Michael Fisher holds the Royal Academy of Engineering Chair in Emerging Technologies in the Department of Computer Science at the University of Liverpool, and is Director of the University's Centre for Autonomous Systems Technology. He is a Fellow of both the BCS and the IET, serves on several (IEEE/BSI) standards committees on autonomous systems, robotics and AI, and leads the UK Network on the Verification and Validation of Autonomous Systems. He is currently involved in a range of research projects developing (semi)autonomous robotics for use in hazardous environments.
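The idea of exposing an agent's "intentions" can be illustrated with a toy sketch. This is not Prof. Fisher's actual framework; the agent structure, goal names and plan library below are purely illustrative of how a BDI-style rational agent can make its currently adopted intention inspectable, which is what enables transparency and verification:

```python
# Illustrative sketch (hypothetical names): a minimal BDI-style agent
# whose current intention is exposed for inspection.

class Agent:
    def __init__(self):
        self.beliefs = set()
        # plan library: goal -> (guard belief, action sequence)
        self.plans = {"deliver": ("path_clear", ["pick_up", "move", "drop"])}
        self.intention = None  # the currently adopted (goal, actions)

    def deliberate(self, goal):
        """Adopt a plan for the goal if its guard belief holds."""
        guard, actions = self.plans[goal]
        if guard in self.beliefs:
            self.intention = (goal, list(actions))
        return self.intention

    def explain(self):
        """Expose what the agent is trying to do and why."""
        if self.intention is None:
            return "no current intention"
        goal, actions = self.intention
        return f"intending '{goal}' via {actions}"

a = Agent()
a.beliefs.add("path_clear")
a.deliberate("deliver")
print(a.explain())  # intending 'deliver' via ['pick_up', 'move', 'drop']
```

Because the intention is an explicit, inspectable data structure rather than an emergent behaviour, properties such as "the agent never intends an unsafe plan" become candidates for model checking.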
Joao Marques-Silva, Univ. Lisbon, PT
Title: Constraint-Based Explanations for Machine Learning Models
Abstract: The practical success of Machine Learning (ML) in different settings motivates computing small explanations for the predictions made. Small explanations are generally accepted as easier for human decision makers to understand. Existing work on computing explanations is based on heuristic approaches, providing no guarantees of quality in terms of how close such solutions are to cardinality- or subset-minimal explanations. This talk describes a novel constraint-agnostic approach for computing explanations for any ML model. The proposed solution exploits abductive reasoning, and imposes the requirement that the ML model be represented as a set of constraints in some target constraint reasoning system for which the decision problem can be answered with some oracle.
Paper link: https://reason.di.fc.ul.pt/~jpms/Drops/inms-aaai19.pdf
Bio: Joao Marques-Silva is Professor of Informatics at the Faculty of Science, University of Lisbon (FCUL). Before joining FCUL, he was affiliated with Instituto Superior Tecnico, Portugal; University College Dublin, Ireland; and the University of Southampton, United Kingdom. Dr. Marques-Silva is a Fellow of the IEEE and was a recipient of the 2009 CAV Award for fundamental contributions to the development of high-performance Boolean satisfiability solvers. His research interests include computational logic, automated reasoning, knowledge representation and reasoning, machine learning, and their applications in data science, software engineering and operations research.
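The flavour of subset-minimal explanation via an oracle can be sketched on a toy model. This is only an illustration, not the paper's actual encoding: the "model" is a trivial function, and a brute-force enumeration stands in for the constraint-reasoning oracle; a deletion-based loop then shrinks the instance to a subset-minimal explanation:

```python
from itertools import product

# Toy "model": predicts 1 iff at least two of the three binary features are 1.
def predict(x):
    return int(sum(x) >= 2)

# Oracle: do the fixed features alone entail the prediction?
# (Brute-force over the free features stands in for a real constraint solver.)
def entails(fixed, target):
    free = [i for i in range(3) if i not in fixed]
    for vals in product([0, 1], repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, vals))
        if predict([x[i] for i in range(3)]) != target:
            return False
    return True

# Deletion-based search for a subset-minimal explanation:
# try to drop each feature; keep it only if the oracle says it is needed.
def minimal_explanation(instance):
    target = predict(instance)
    expl = dict(enumerate(instance))   # start from the full instance
    for i in list(expl):
        trial = {j: v for j, v in expl.items() if j != i}
        if entails(trial, target):     # feature i is redundant: drop it
            expl = trial
    return expl

print(minimal_explanation([1, 1, 0]))  # {0: 1, 1: 1}
```

Each oracle call is one decision-problem query, so the number of queries is linear in the number of features; the resulting set is subset-minimal, though not necessarily cardinality-minimal.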
Claire Pagetti, ONERA, FR
Abstract: Computer vision techniques have made considerable progress in recent years. This progress now makes the practical use of computer vision possible in civil drones or aircraft, in replacement of human pilots. The question that naturally arises is how to certify those types of systems at a given level of safety. The aim of the talk is to understand the gap between today's computer vision systems and the current certification standards, and to identify the key activities needed to make computer-vision systems certifiable.
Bio: Claire Pagetti has been a researcher at ONERA since 2005 and an associate professor at ENSEEIHT since 2007. Her research interests concern the safe implementation of control-command avionic applications on avionic platforms. She has contributed to several industrial, European and French projects that have led to several publications, industrial developments and a patent.
Félix Ingrand, LAAS-CNRS, FR
Abstract: Complete validation and verification of the software of an autonomous robot (AR) is out of reach for now. Still, this should not prevent us from trying to verify some properties of components and their integration. There are many approaches to consider for the V&V of AR software, e.g. writing high-level specifications and deriving correct implementations from them, or deploying and developing new or modified V&V formalisms to program robotics components. Learned models put aside, most models used in deliberation functions [2] are amenable to formal V&V [1], and focussing on functional-level V&V is probably more challenging. We propose an approach which relies on an existing robotics specification/implementation framework (GenoM) to deploy robotics functional components, to which we harness existing, well-known formal V&V frameworks (UPPAAL, BIP, FIACRE/TINA). GenoM was originally developed by roboticists and software engineers who wanted to specify clearly and precisely how a reusable, portable, middleware-independent functional component should be written and implemented. Many complex robotic experiments have been developed and deployed over 20 years using GenoM, and it is only recently that its designers realized that the rigorous specification, the clear semantics of the implementation and the template mechanism to synthesize code open the door to automatic formal model synthesis and formal V&V (offline and online). This bottom-up approach, which starts from component implementations, may be more modest than top-down ones, which aim at a broader and more global view of the problem. Yet it gives encouraging results on real implementations, on which one can build more complex high-level properties to be verified offline, but also at runtime with online monitors.
[1] Félix Ingrand. Recent Trends in Formal Validation and Verification of Autonomous Robots Software. IEEE International Conference on Robotic Computing, Feb 2019, Naples, Italy. ⟨hal-01968265⟩ https://hal.laas.fr/hal-01968265
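The kind of online monitor mentioned above can be sketched in a few lines. The property, event names and deadline below are hypothetical, not taken from any GenoM specification; in the actual workflow such monitors would be synthesized from the component models and checked offline with UPPAAL, BIP or FIACRE/TINA:

```python
# Hypothetical safety property for a functional component:
# a "stop" must occur within 3 events after any "obstacle" event.

class Monitor:
    def __init__(self, deadline=3):
        self.deadline = deadline
        self.pending = None  # events left to react to an obstacle

    def step(self, event):
        """Feed one event; return False on a property violation."""
        if event == "obstacle":
            self.pending = self.deadline
        elif self.pending is not None:
            if event == "stop":
                self.pending = None       # reacted in time
            else:
                self.pending -= 1
                if self.pending == 0:
                    return False          # deadline missed: violation
        return True

m = Monitor()
trace = ["move", "obstacle", "move", "stop", "move"]
print(all(m.step(e) for e in trace))  # True: property holds on this trace
```

Run online, such a monitor can flag (or veto) behaviour that the offline models proved safe only under assumptions the running system has just violated.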
Mario Gleirscher, Univ. York, UK
Title: Risk Structures: An Algebraic Approach to Risk-aware Systems
Abstract: To achieve acceptable safety, autonomous systems will have to perceive and reduce risk by incorporating risk models and risk-handling mechanisms that enhance their mission controllers. Complex environments and the missing fallback to human operators pose tough challenges for the engineering of risk handlers, particularly for the hazard analysis and risk modelling leading to such handlers. This talk will discuss a formal framework for risk modelling and analysis, and an algebraic method for the step-wise design of risk handlers for risk-aware systems. The talk will focus on the concept of mitigation orders, useful for property preservation across handler design steps. Furthermore, it will outline how the designed risk handlers can be used as corrective monitors to extend the mission controllers of such systems. The talk will elaborate on the preprint available from https://arxiv.org/abs/1904.10386 .
Bio: Mario Gleirscher was born in Hall, Austria. He is a qualified production engineer and received the M.Sc. degree in computer science with a minor in mathematics from the Technical University of Munich (TUM), Germany, in 2005. He has gained several years of practical experience as a consultant, systems engineer, and software developer. Returning to academia, he received the Ph.D. degree in computer science from TUM in 2014. He has been a visiting research fellow at the University of York, U.K. His research interests cover applied formal methods, particularly algebraic methods, formal reasoning about risk in machines, and the design of risk-aware autonomous machines. Dr. Gleirscher is a member of the ACM, IEEE, and GI.
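The role of a risk handler as a corrective monitor can be illustrated with a deliberately small sketch. It is loosely inspired by the risk-structure idea rather than taken from the preprint: each risk factor is a tiny state machine (inactive, active, mitigated), and the handler overrides the mission controller's command while a factor is active; all state, event and command names below are invented for the example:

```python
# Illustrative risk handler used as a corrective monitor.

class RiskFactor:
    def __init__(self, activate, mitigate):
        self.activate = activate    # event that raises this risk
        self.mitigate = mitigate    # corrective command that handles it
        self.state = "inactive"

    def observe(self, event):
        if event == self.activate:
            self.state = "active"

def corrective_monitor(factors, event, nominal_command):
    """Pass through the mission command unless a risk factor is active."""
    for f in factors:
        f.observe(event)
    for f in factors:
        if f.state == "active":
            f.state = "mitigated"
            return f.mitigate      # override with the corrective action
    return nominal_command

factors = [RiskFactor("low_battery", "return_home"),
           RiskFactor("intruder", "safe_stop")]
print(corrective_monitor(factors, "intruder", "continue"))  # safe_stop
print(corrective_monitor(factors, "clear", "continue"))     # continue
```

In the algebraic setting, the interesting engineering question is which mitigation fires first when several factors are active at once; a mitigation order over the factors resolves this while preserving the properties established at earlier design steps.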