This page is dedicated to the (informal) VeriDis seminar. Our seminar (actually closer to a working group) takes place roughly every 2 weeks.
Friday 8th January 2021 (1pm): Mathieu Montin (Université de Lorraine 🇫🇷)
Title: An Agda tutorial : language and development environment
Agda is a purely functional, dependently typed language developed in the late 2000s by Ulf Norell in his PhD thesis. While the language was originally directed towards programming, treating dependent types as a means to improve that activity, it quickly became apparent that Agda could equally be used as a proof assistant by exploiting the well-known Curry-Howard correspondence. Agda comes with several convenient features which make programming and verifying with the language both powerful and satisfying, which this introduction aims to demonstrate. From the definition of basic data types to more advanced predicates, we will explore the possibilities of the language as well as the automation and help that the surrounding tools provide throughout the programming effort.
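As a taste of the kind of development the tutorial covers (a minimal sketch, not taken from the talk itself), here is a data type, a function, and a Curry-Howard-style proof in Agda:

```agda
-- Natural numbers as an inductive data type
data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

-- Addition, defined by recursion on the first argument
_+_ : ℕ → ℕ → ℕ
zero  + n = n
suc m + n = suc (m + n)

-- Propositional equality, defined from scratch to keep the file self-contained
data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x

-- A proof as a program: zero is a left identity of _+_.
-- The equation holds by computation, so the constructor refl suffices.
+-zeroˡ : (n : ℕ) → (zero + n) ≡ n
+-zeroˡ n = refl
```

Type-checking the file is the verification step: if Agda accepts `+-zeroˡ`, the property is proven.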
Friday 11th December 2020 (1pm): Jawher Jerray (Université Sorbonne Paris Nord 🇫🇷)
Friday 4th December 2020 (1pm): Engel Lefaucheux (MPI Saarbrücken 🇩🇪)
Title: On Information Control in Probabilistic Systems: a closer look on Opacity and Diagnosis
The control of the information given by a system has recently seen increasing importance due to the omnipresence of communicating systems, the need for privacy, etc.
This control can be used in order to disclose to an observer an information of the system, or, oppositely, to hide one.
In this talk, I will consider the control of the information released by a system represented by a stochastic model such as a Markov chain.
Here, an external observer is interested in detecting a particular set of relevant paths of the system.
However, he can only observe those paths through an observation function which obfuscates the real behaviour of the system.
Exact disclosure of the information occurs when the observer can deduce from a finite observation that the path is relevant; in the approximate variant, disclosure occurs when the path can be identified as relevant with high accuracy.
I will discuss the problems of diagnosability and opacity, which correspond, in spirit, to the cases where one wants to disclose all the information or to hide as much of it as possible, with a focus on the approximate notion of disclosure.
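As a rough illustration (the notation here is mine, not necessarily the speaker's), for a set $R$ of relevant paths and an observation function $\mathcal{O}$, the two notions of disclosure by a finite observation $o$ can be sketched as:

$$\Pr(\rho \in R \mid \mathcal{O}(\rho) = o) = 1 \quad \text{(exact disclosure)}, \qquad \Pr(\rho \in R \mid \mathcal{O}(\rho) = o) \geq 1 - \varepsilon \quad \text{(approximate disclosure)}.$$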
Friday 27th November 2020 (1pm): Guillaume Ambal (Université de Rennes 1 🇫🇷)
Title: From Big-Step to Small-Step Skeletal Semantics
Friday 13th November 2020 (1pm): Mathieu Montin (Université de Lorraine 🇫🇷)
Cyber-physical systems (CPS) are usually complex systems which are often critical, meaning that their failure can have significant negative impacts on human lives. A key point in their development is the verification and validation (V & V) activities, which are used to assess their correctness with respect to user requirements and the associated specifications. This process aims at avoiding failure cases, thus preventing any incident or accident. In order to conduct these V & V steps on such complex systems, separations of concerns of various natures are used. For that purpose, the system is modelled using heterogeneous models that have to be combined together. The nature of these separations of concerns can be as follows: horizontal, which corresponds to a structural decomposition of the system; vertical, which corresponds to the different steps leading from the abstract specification to the concrete implementation; and transversal, which consists in gathering together the parts that are thematically identical (function, performance, security, safety…). These parts are usually expressed using domain-specific modelling languages (DSML), while the V & V activities are historically conducted using testing and proofreading, and, more and more often, using formal methods, which is what our approach advocates. In all these cases, the V & V activities must take these separations into account in order to derive confidence in the global system from confidence in its sub-parts along the separation in question. In other words, to ensure the correctness of the system, a behavioural semantics is needed, which has to rely on the ad-hoc semantics of the subsystems. In order to define it, these semantics must be successfully combined in a single formalism.
This thesis stems from the GEMOC project: a workbench that allows the definition of various languages along with their coordination properties. It targets the formal modelling of the GEMOC core through the association of a trace semantics to each concern and the expression of constraints between them to encode the correct behaviour of the system. This thesis follows several other works conducted under the TOPCASED, OPEES, QUARTFT, P and GEMOC projects, and provides four contributions in that global context: the first one proposes a methodology to give an operational semantics to executable models, illustrated through two case studies, Petri nets and SimplePDL. The second one proposes a formal context in which refinement can be expressed, to tackle vertical separation. The third one gives a denotational semantics to CCSL, the language currently used in the GEMOC projects to express behavioural properties between events from one or several models, possibly heterogeneous. Finally, the fourth one investigates how to extend CCSL with the notion of refinement we proposed. All these contributions are mechanized in the Agda proof assistant, and have thus been modelled and proven in a formal manner.
Friday 23rd October 2020 (1pm): Dylan Marinho (Université de Lorraine 🇫🇷)
Title: Defeating Opaque Predicates using Binary Analysis and Machine Learning
Program obfuscation, since it hinders reverse-engineering analysis while preserving the program's behaviour, is nowadays used as a method to protect software, particularly for intellectual property matters. However, this tool is also used by malware designers in order to evade investigation and let their malware propagate itself.
Therefore, obfuscation has been a scientific problem ever since we started trying to analyse this kind of program. De-obfuscation methods have been introduced, but still remain insufficient. The main goal of this work was to introduce a method, combining binary analysis and machine learning, to detect obfuscation by “opaque predicates”.
Friday 16th October 2020 (1pm): Johan Arcile (Université de Lorraine 🇫🇷)
Title: Modeling and formal verification of a communicating autonomous vehicle system [slides]
A system of autonomous vehicles consists of several agents individually making decisions in real time while exchanging information with other agents nearby.
The validation of formal logic properties in such systems is not possible through naive approaches, due to the large number of variables involved in their representation.
The two complementary works that will be presented have been developed to address this issue:
-The VerifCar software framework, dedicated to decision-making analysis of communicating autonomous vehicles, which uses the state-of-the-art tool Uppaal.
-The formalism of MAPTs and its dedicated exploration algorithms, allowing the use of heuristics that reduce the computation time needed to address reachability problems.
Friday 31st January 2020 (11am): Étienne André (Université de Lorraine 🇫🇷)
Title: Parametric Timed Model Checking for Guaranteeing Timed Opacity
Information leakage can have dramatic consequences on systems security. Among harmful information leaks, timing information leakage is the ability of an attacker to deduce internal information from the system's execution time.
We address the following problem: given a timed system, synthesize the execution times for which one cannot deduce whether the system performed some secret behavior. We solve this problem in the setting of timed automata (TAs). We first provide a general solution, and then extend the problem to parametric TAs, by synthesizing internal timings making the TA secure. We study decidability, devise algorithms, and show that our method can also apply to program analysis.
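A rough way to phrase the synthesis problem (sketched in my own notation, not necessarily the authors'): writing $D_{\mathit{priv}}$ for the set of execution times of runs performing the secret behaviour and $D_{\mathit{pub}}$ for the execution times of runs that do not, the execution times for which an observer of the total duration learns nothing are those achievable both ways:

$$ D_{\mathit{opaque}} = D_{\mathit{priv}} \cap D_{\mathit{pub}} $$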
This is joint work with Sun Jun (SMU, Singapore).
Friday 17th January 2020 (11am): Masaki Waga (和賀 正樹) (Tokyo 🇯🇵)
Friday 10th January 2020 (11am): Stephan Merz (Université de Lorraine 🇫🇷)
Title: Auxiliary Variables Revisited
When proving refinement between two specifications, refinement mappings are the standard technique for explaining how the lower-level specification implements the internal state components of the higher-level specification. However, it is well known that refinement mappings by themselves are incomplete, and it has been suggested to augment the lower-level specification by history and prophecy variables in order to obtain a more complete proof method. We introduce new rules for prophecy variables that both yield stronger completeness results and, we believe, are simpler to apply in practice. Our approach is based on two ideas: simple prophecy variables yield one-shot predictions over a finite time horizon, and several such variables can be combined into an array for making prophecies over entire executions.
(joint work with Leslie Lamport)
(Below is just a dummy template for future events.)
Friday 16th October 2046 (1pm): TBA (Université de Lorraine 🇫🇷)