16th International Workshop on Boolean Problems

Keynote Speaker


Thursday | Keynote 1 | September 19


Jan Peleska
Universität Bremen

Scary or Promising? Machine Learning in Safety-Critical Control Systems



Abstract:
One of the challenges our society faces today stems from a recent paradigm shift in computer science: the advent of powerful, broadly applicable artificial intelligence (AI). The topic is currently discussed nearly everywhere in the media. In this talk, I will focus on a specific "sub-challenge": the risks involved in applying machine learning in safety-critical applications, and the possibilities to mitigate these risks to a socially acceptable level. Coping with this challenge is currently of considerable importance, since (1) our society has come to take technical safety for granted, (2) large enterprises have replaced technical specialists in the upper management layers with accountants, controllers, and true believers in shareholder value, and (3) autonomous systems (road vehicles, trains, robots, drones, etc.) have become tempting business cases, but cannot be operated without applying machine learning in safety-critical control components.

As it turns out, the specific risk induced by using machine learning (ML) in safety-critical control systems is not really AI-specific or ML-specific. Its root cause is that no globally valid logical specification exists describing how arbitrary elements of the input space should be transformed into outputs. The expected output is only specified for a training and verification set of data, which represents a tiny fraction of all the elements in the input space. The same problem occurs in increasingly complex applications that do not rely on AI at all: the complexity prevents system designers from creating comprehensive models of the expected system behaviour. Instead, so-called scenario libraries are created, specifying how the system should behave in certain situations. For systems of this kind, it is necessary to determine the residual risk posed by uncovered inputs or uncovered scenarios that could occur during real-world operation.
Using a trained neural network for obstacle detection in autonomous trains as an example, we will demonstrate how such residual-risk estimates can be calculated using a combination of mathematical analysis and statistics. We will also show that the statistical part of this approach can be used to determine the residual risk of missing scenarios in complex system specifications.
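As a back-of-the-envelope illustration of the statistical ingredient in such arguments (a sketch under the assumption of independent, identically distributed test scenarios; not the speaker's actual method), the classical "rule of three" gives an upper confidence bound on the residual failure probability when all n verification scenarios pass:

```python
def failure_rate_upper_bound(n_trials: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the true per-scenario failure probability
    when all n_trials independent test scenarios were passed (zero failures).

    With zero observed failures, the exact (Clopper-Pearson) bound is
    p_max = 1 - (1 - confidence) ** (1 / n_trials),
    which for confidence = 0.95 is approximately 3 / n_trials
    (the so-called "rule of three").
    """
    if n_trials <= 0:
        raise ValueError("n_trials must be positive")
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

# Example: 10,000 passed scenarios bound the residual failure
# probability at roughly 3/10,000 with 95% confidence.
bound = failure_rate_upper_bound(10_000)
```

More verification scenarios tighten the bound, which is why residual-risk arguments of this kind hinge on how large and how representative the scenario set is.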

CV:
Since 1995, Dr. Peleska has been professor of computer science (operating systems and distributed systems) at the University of Bremen, Germany. He studied mathematics at the University of Hamburg and wrote his doctoral thesis on a topic in differential geometry. From 1984 to 1990 he worked at Philips as a senior software designer and later as a department manager in the field of fault-tolerant systems, distributed systems, and database systems. From 1990 to 1994 he managed a department at Deutsche System-Technik responsible for the development of safety-critical embedded systems. Since 1994 he has worked as a consultant specialising in development methods, verification, validation, and testing of safety-critical systems. He completed his habilitation thesis, on formal methods for the development of dependable systems, in 1995. Together with his wife, Cornelia Zahlten, he founded the company Verified Systems International GmbH in 1998, which provides tools and services in the field of safety-critical system development, verification, validation, and testing. His current research interests include formal methods for the development of dependable systems, test automation based on formal methods with applications to embedded real-time systems, verification of security properties, and formal methods in combination with CASE methods. Current industrial applications of his research focus on the development and verification of avionic software, space mission systems, and railway and automotive control systems.


Friday | Keynote 2 | September 20


Lars Hedrich
Johann Wolfgang Goethe-Universität, Frankfurt am Main

Automatic Synthesis of Analog Neural Networks for Edge Services




Abstract:
N.N.

CV:
Lars Hedrich is a full professor at the Institute of Computer Science, University of Frankfurt, where he heads the design methodology group. He was born in Hanover, Germany, in 1966 and graduated (Dipl.-Ing.) in electrical engineering from the University of Hanover in 1992. He received the Ph.D. degree in 1997 and became an assistant professor at the same university in 2002, before moving to Frankfurt in 2004. His research interests span several areas of analog design automation: symbolic analysis of linear and nonlinear circuits, behavioral modeling, automatic circuit synthesis, formal verification, and robust design.


Friday | Keynote 3 | September 20


N.N.
TBD

Title





Abstract:
N.N.

CV:
N.N.