This page is dedicated to the (informal) VeriDis seminar. Our seminar (actually closer to a working group) takes place roughly every 2 weeks.
Friday 23rd October 2020 (1pm): Dylan Marinho (Université de Lorraine 🇫🇷)
Title: Defeating Opaque Predicate using Binary Analysis and Machine Learning
Program obfuscation, because it hinders reverse-engineering analysis while preserving the program's behavior, is nowadays used as a method to protect software, particularly for intellectual property reasons. However, this tool is also used by malware designers to evade investigation and to help their malware propagate.
Obfuscation therefore becomes a scientific problem as soon as we try to analyze such programs. De-obfuscation methods have been introduced, but they remain insufficient. The main goal of this work was to introduce a method, combining binary analysis and machine learning, to detect obfuscation by “opaque predicates”.
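To make the notion concrete, here is a minimal sketch (not taken from the talk) of what an opaque predicate looks like at source level. The guard below is always true, since x·(x+1) is a product of two consecutive integers and hence even, but a static analyser sees an ordinary conditional branch; the function name and the bogus payload are hypothetical.

```python
def opaquely_guarded(x: int) -> int:
    # Opaque predicate: x*x + x = x*(x+1) is always even, so this
    # branch is always taken -- but that fact is not obvious to a
    # disassembler, which must keep both paths in its analysis.
    if (x * x + x) % 2 == 0:
        return x + 1            # the real code path, always executed
    else:
        return junk_payload(x)  # dead code injected by the obfuscator

def junk_payload(x: int) -> int:
    # Hypothetical junk the obfuscator inserts to mislead analysis.
    return x ^ 0xDEADBEEF
```

Detecting that such a predicate is invariantly true (or false) is exactly the de-obfuscation task the talk addresses.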
Friday 16th October 2020 (1pm): Johan Arcile (Université de Lorraine 🇫🇷)
Title: Modeling and formal verification of a communicating autonomous vehicle system
A system of autonomous vehicles consists of several agents individually making decisions in real time while exchanging information with other agents nearby.
The validation of formal logic properties in such systems is not possible through naive approaches, due to the large number of variables involved in their representation.
The two complementary works that will be presented have been developed to address this issue:
-The VerifCar software framework, dedicated to decision-making analysis of communicating autonomous vehicles, which uses the state-of-the-art tool Uppaal.
-The formalism of MAPTs and its dedicated exploration algorithms, allowing the use of heuristics that reduce the computation time needed to address reachability problems.
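For readers unfamiliar with the terminology, the reachability problems mentioned above ask whether some (typically undesirable) state can be reached from the initial configuration. A minimal sketch of explicit-state reachability checking is shown below; this is plain breadth-first search on a toy state space, not the MAPT exploration algorithms or their heuristics.

```python
from collections import deque

def reachable(initial, successors, is_target):
    """Explicit-state breadth-first reachability check.

    A toy illustration only: real model checkers fight the state-space
    explosion with symbolic representations and guided heuristics.
    """
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if is_target(state):
            return True
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Toy usage: states are integers, a step adds 1 or doubles (bounded
# by 16), and we ask whether state 10 is reachable from state 1.
print(reachable(1,
                lambda s: [x for x in (s + 1, s * 2) if x <= 16],
                lambda s: s == 10))  # → True
```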
Friday 31st January 2020 (11am): Étienne André (Université de Lorraine 🇫🇷)
Title: Parametric Timed Model Checking for Guaranteeing Timed Opacity
Information leakage can have dramatic consequences on systems security. Among harmful information leaks, timing information leakage is the ability of an attacker to deduce internal information from the system's execution time.
We address the following problem: given a timed system, synthesize the execution times for which one cannot deduce whether the system performed some secret behavior. We solve this problem in the setting of timed automata (TAs). We first provide a general solution, and then extend the problem to parametric TAs, by synthesizing internal timings making the TA secure. We study decidability, devise algorithms, and show that our method can also apply to program analysis.
This is joint work with Sun Jun (SMU, Singapore).
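A classic, concrete instance of the timing leakage discussed above (an illustration, not an example from the talk) is a secret comparison that exits at the first mismatch: its running time reveals how long the correct prefix of a guess is. The sketch below contrasts it with a comparison whose time does not depend on where the inputs differ, using Python's standard hmac.compare_digest.

```python
import hmac

def leaky_check(secret: str, guess: str) -> bool:
    # Early exit: running time grows with the length of the matching
    # prefix, letting an attacker recover the secret character by
    # character from timing measurements.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def constant_time_check(secret: str, guess: str) -> bool:
    # compare_digest takes time independent of where the inputs
    # differ, removing this timing channel.
    return hmac.compare_digest(secret.encode(), guess.encode())
```

Synthesizing execution times for which no such deduction is possible is the problem the talk formalizes in the setting of (parametric) timed automata.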
Friday 17th January 2020 (11am): Masaki Waga (和賀 正樹) (Tokyo 🇯🇵)
Friday 10th January 2020 (11am): Stephan Merz (Université de Lorraine 🇫🇷)
Title: Auxiliary Variables Revisited
When proving refinement between two specifications, refinement mappings are the standard technique for explaining how the lower-level specification implements the internal state components of the higher-level specification. However, it is well known that refinement mappings by themselves are incomplete, and it has been suggested to augment the lower-level specification by history and prophecy variables in order to obtain a more complete proof method. We introduce new rules for prophecy variables that both yield stronger completeness results and, we believe, are simpler to apply in practice. Our approach is based on two ideas: simple prophecy variables yield one-shot predictions over a finite time horizon, and several such variables can be combined into an array for making prophecies over entire executions.
(joint work with Leslie Lamport)
(Below is just a dummy template for future events.)
Friday 16th October 2020 (1pm): TBA (Université de Lorraine 🇫🇷)