Research


Overall objectives

The VeriDis project team includes members of the Formal Methods department at LORIA, the computer science laboratory in Nancy, and members of the research group Automation of Logic at Max Planck Institut für Informatik in Saarbrücken. It is headed by Stephan Merz and Christoph Weidenbach. VeriDis was created in 2010 as a local research group of Inria Nancy – Grand Est and has been an Inria project team since July 2012.

The objectives of VeriDis are to contribute to advances in verification techniques, including automated and interactive theorem proving, and to make them available for the development and analysis of concurrent and distributed algorithms and systems, based on mathematically precise and practically applicable development methods. The techniques that we develop are intended to assist designers of algorithms and systems in carrying out formally verified developments, where proofs of relevant properties, as well as bugs, can be found with a high degree of automation.

Within this context, we work on techniques for automated theorem proving for expressive languages based on first-order logic, with support for theories (including fragments of arithmetic or of set theory) that are relevant for specifying algorithms and systems. Ideally, systems and their properties would be specified using high-level, expressive languages, errors in specifications would be discovered automatically, and finally, full verification could also be performed completely automatically. Due to the fundamental undecidability of the problem, this cannot be achieved in general. Nevertheless, we have observed important advances in automated deduction in recent years, to which we have contributed. These advances suggest that a substantially higher degree of automation can be achieved over what is available in today’s tools supporting deductive verification. Our techniques are developed within SMT (satisfiability modulo theories) solving and first-order logic reasoning based on superposition, the two main frameworks of contemporary automated reasoning that have complementary strengths and weaknesses, and we are interested in making them converge when appropriate. Techniques developed within the symbolic computation domain, such as algorithms for quantifier elimination for appropriate theories, are also relevant, and we are working on integrating them into our portfolio of techniques. In order to handle expressive input languages, we are working on techniques that encompass tractable fragments of higher-order logic, for example for specifying inductive or co-inductive data types, for automating proofs by induction, or for handling collections defined through a characteristic predicate.

Since full automatic verification remains elusive, another line of our research targets interactive proof platforms. We intend these platforms to benefit from our work on automated deduction by incorporating powerful automated backends and thus raise the degree of automation beyond what current proof assistants can offer. Since most conjectures stated by users are initially wrong (due to type errors, omitted hypotheses or overlooked border cases), it is also important that proof assistants be able to detect and explain such errors rather than letting users waste considerable time in futile proof attempts. Moreover, increased automation must not come at the expense of trustworthiness: skeptical proof assistants expect to be given an explanation of the proof found by the backend prover that they can certify.

Model checking is also an established and highly successful technique for verifying systems and for finding errors. Our contributions in this area more specifically target quantitative, in particular timed or probabilistic, systems. A specificity of VeriDis is to consider partially specified systems, described using parameters; the verification problem then becomes the synthesis of suitable parameter valuations.

Our methodological and foundational research is accompanied by the development of efficient software tools, several of which go beyond pure research prototypes: they have been used by others, have been integrated in verification platforms developed by other groups, and participate in international competitions. We also validate our work on verification techniques by applying them to the formal development of algorithms and systems. We mainly target high-level descriptions of concurrent and distributed algorithms and systems. This class of algorithms is by now ubiquitous, ranging from multi- and many-core algorithms to large networks and cloud computing, and their formal verification is notoriously difficult. Targeting high levels of abstraction allows the designs of such systems to be verified before an actual implementation has been developed, contributing to reducing the costs of formal verification. The potential of distributed systems for increased resilience to component failures makes them attractive in many contexts, but also makes formal verification even more important and challenging. Our work in this area aims at identifying classes of algorithms and systems for which we can provide guidelines and identify patterns of formal development that make verification less an art and more an engineering discipline. We mainly target components of operating systems, distributed and cloud services, and networks of computers or mobile devices.

Beyond formal system verification, we pursue applications of some of the symbolic techniques that we develop in other domains. We have observed encouraging success in using techniques of symbolic computation for the qualitative analysis of biological and chemical networks described by systems of ordinary differential equations that were previously only accessible to large-scale simulation. Such networks include biological reaction networks as they occur in models for diseases such as diabetes or cancer. They furthermore include epidemic models such as variants and generalizations of SEIR1 models, which are typically used for Influenza A or Covid-19. This work is being pursued within a large-scale interdisciplinary collaboration. The goal is for our work grounded in verification to have an impact on the sciences beyond engineering, which will in turn feed back into our core formal methods community.

Last activity report: 2023

Results

New results

Automated and Interactive Theorem Proving

Contributions to SMT Techniques

Quantifier Handling in Higher-Order SMT.

Joint work with Haniel Barbosa (Univ. Federal de Minas Gerais, Brazil).

SMT solvers have over the years been able to cope with increasingly expressive logics, from ground formulas to full first-order logic (FOL). In the past, we proposed a pragmatic extension for SMT solvers to support higher-order logic reasoning natively without compromising performance on FOL reasoning, thus leveraging the extensive research and implementation efforts dedicated to efficient SMT solving. However, the higher-order SMT solvers resulting from this work are not as effective as we would expect given their performance on first-order logic. We believe this comes from the fact that only the core of the SMT solver has been extended, ignoring in particular the modules for quantifier instantiation.

This motivated us to start working, in 2020, on an extension of the main quantifier-instantiation approach (congruence closure with free variables, CCFV) to higher-order logic. We are working on an encoding of the CCFV higher-order problem into a set of SAT constraints. In previous years, we concentrated our efforts on the theory, proving the soundness and completeness of our approach, and developed pseudo-code for all elements of the CCFV computation. In 2022 and 2023, these algorithms were implemented in a C++ library, and they were tested on benchmarks from the SMT-LIB collection. We have started to integrate this library within a new SMT framework. The library will eventually be released under a permissive open-source license.
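
To give a flavor of the instantiation problem that CCFV solves, here is a minimal Python sketch of our own making (it uses the z3 solver purely as an entailment oracle and does not show the SAT encoding described above): given ground literals E, it searches for ground instantiations of a free variable x that make a quantified literal entailed by E.

    from z3 import Solver, DeclareSort, Function, Const, Not, unsat

    S = DeclareSort('S')
    f = Function('f', S, S)
    g = Function('g', S, S)
    a, b = Const('a', S), Const('b', S)

    ground = [f(a) == g(b), a == b]      # ground literals E
    candidates = [a, b, f(a), g(b)]      # ground terms occurring in E

    def entailed(lit):
        s = Solver()
        s.add(ground)
        s.add(Not(lit))                  # E |= lit  iff  E and not(lit) is unsat
        return s.check() == unsat

    # instances t such that E |= (f(x) = g(x))[x := t]
    print([t for t in candidates if entailed(f(t) == g(t))])   # [a, b]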

Automated reasoning techniques beyond SMT

Extensions of a formal framework for automated reasoning.

We are part of a group developing a framework for formal refutational completeness proofs of abstract provers that implement automated reasoning calculi, especially calculi based on saturation such as ordered resolution and superposition. In previous work, we published a framework that fully captures the dynamic aspects of proof search with a saturation calculus. This framework covers clause splitting as supported by modern superposition provers with the help of a SAT solver. In particular, our formalization revealed some completeness issues with the theorem prover Vampire.

This year, we extended the Isabelle formalization by representations of the main loops of saturation-based theorem provers and their fairness conditions. In the process, we found and repaired several issues with the (in fact, our own) description of the Zipperposition loop, a novel loop that handles inferences producing an infinite stream of conclusions. In parallel, Martin Desharnais, for his PhD thesis, completed an instantiation of this framework for the superposition calculus. We also made progress on the Isabelle/HOL mechanization of the framework with clause splitting; Ghilain Bergeron contributed to this endeavor for his master's thesis. This last piece of work is still ongoing, while the first one was completed in 2022 and led to a paper presented at CADE in 2023 23.
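
As a rough illustration of what such a main loop looks like, here is a schematic given-clause loop in Python (our simplification; real provers add redundancy elimination and clause-selection heuristics, and the Zipperposition loop additionally dovetails infinite streams of conclusions). Fairness is obtained here by processing passive clauses oldest-first.

    from collections import deque

    def saturate(initial_clauses, inferences, is_empty_clause, max_steps=10000):
        passive = deque(initial_clauses)   # discovered but not yet processed
        active = set()                     # processed clauses
        for _ in range(max_steps):
            if not passive:
                return 'saturated', active     # no refutation exists
            given = passive.popleft()          # fair choice: oldest clause first
            if is_empty_clause(given):
                return 'unsat', given          # refutation found
            active.add(given)
            # all inferences between the given clause and the active set
            passive.extend(inferences(given, active))
        return 'timeout', active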

Effective Symbolic Model Construction.

When automated reasoning techniques are applied to prove a specific property of some system, a proof is a certificate of success, and we have worked on explaining its gist 63. If a proof attempt fails, automated reasoning techniques may still terminate and implicitly provide a representation of a counter-model to the property of interest. We have worked on effective representations of such counter-models that provide insight into why the desired property does not hold. This way, the system, its formalization, or the property itself can be debugged 47, 27.
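
At the level of SMT solving, the basic phenomenon is easy to demonstrate with z3 in Python (a toy example of our own, not the model representations studied in the cited work): when the proof attempt fails, the solver's model is a counter-model explaining why the conjecture does not hold.

    from z3 import Solver, Real, Not, sat

    x = Real('x')
    s = Solver()
    s.add(Not(x * x >= x))        # negate the conjecture "x*x >= x"
    if s.check() == sat:          # the conjecture is not valid over the reals
        print(s.model())          # counter-model, e.g. x = 1/2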

Certification of FOL_ID cyclic proofs.

Cyclic induction is a powerful reasoning technique that consists in blocking the proof development of certain subgoals already encountered during the proof process. In the setting of first-order logic with inductive definitions and equality (FOL_ID), cyclic proofs can be built automatically by the Cyclist prover, but prover implementations are error-prone and validating the proofs by hand can be tedious. On the other hand, cyclic induction is not yet integrated into certifying proof environments that support first-order logic and inductive definitions, such as Isabelle and Coq.

We have proposed in 20 a general procedure for certifying formula-based Noetherian induction reasoning in order to check FOL_ID cyclic proofs using Coq. The output is a Coq script proving several theorems and lemmas, whose deductive part translates the E-Cyclist proof steps without using proof reconstruction techniques. This approach also allows for finding errors in cyclic proofs in a very precise way, at the level of proof steps. We established a bridge between formula-based Noetherian induction and FOL_ID cyclic induction, by identifying a class of pre-proofs certifiable by Coq when some ordering and derivability constraints are satisfied, such as those produced by the E-Cyclist prover. The advantages of our approach are threefold:

  • The certification of cyclic FOL_ID proofs is mechanical. Coq can validate every single step of the E-Cyclist proofs, as well as the induction arguments; it also helps to identify errors in a very precise way.
  • There is great potential for automation. The methodology has already been used to automatically convert implicit induction proofs to Coq scripts 70.
  • Cyclic induction can be directly performed in Coq. A library of Coq functions is provided and can be reused to manage the induction part; the underlying induction scheme is sketched below.
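
For reference, the Noetherian (well-founded) induction scheme underlying the certification can be stated and proved in a few lines; here is a Lean 4 sketch (ours, independent of the Coq library actually used):

    -- Well-founded (Noetherian) induction: if P is hereditary with respect
    -- to a well-founded relation r, then P holds everywhere.
    theorem noetherian_induction {α : Type} {r : α → α → Prop}
        (wf : WellFounded r) {P : α → Prop}
        (step : ∀ x, (∀ y, r y x → P y) → P x) : ∀ x, P x :=
      fun x => wf.induction x step
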
Proofs for TLA+.

In her PhD work, Rosalie Defourné defined improved encodings of the non-temporal theory of TLA+ in the input languages of automated theorem provers for first-order and higher-order logic, including SMT solvers and Zipperposition. The SMT encoding relies on an axiomatization of the operators of TLA+ set theory, annotated with triggers for finding relevant instances of these axioms. The previously existing encoding relied heavily on rewriting input formulas in order to simplify them before submitting them to the backend solver. Verifying its soundness required not only inspecting the axioms given to the solver, but also understanding the preprocessing techniques that were applied to the input formulas. Besides, optimizing the set of rewrite rules was delicate because properties such as confluence and termination had to be maintained. In contrast, the soundness of the new encoding can be determined directly by verifying the background axioms. Adding triggers does not endanger soundness, and benchmarking the new backend over an extensive corpus of existing TLA+ proofs showed that it performs comparably to or better than the old encoding. This work was presented in a conference publication 30 and in Rosalie Defourné’s PhD thesis 45, which was defended in November.
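
The idea of a trigger-annotated axiomatization can be illustrated with z3 in Python (a toy fragment of our own making; the names mem and cup are ours, not those of the actual backend encoding): the pattern tells the solver to instantiate a set-theory axiom only when a matching term occurs in the proof obligation.

    from z3 import (DeclareSort, Function, BoolSort, Consts, ForAll,
                    Solver, Or, Not)

    U = DeclareSort('U')                      # universe of TLA+ values
    mem = Function('mem', U, U, BoolSort())   # mem(x, S) encodes  x \in S
    cup = Function('cup', U, U, U)            # cup(S, T) encodes  S \cup T

    x, S, T = Consts('x S T', U)
    ax = ForAll([x, S, T],
                mem(x, cup(S, T)) == Or(mem(x, S), mem(x, T)),
                patterns=[mem(x, cup(S, T))])  # trigger annotation

    a, A, B = Consts('a A B', U)
    s = Solver()
    s.add(ax, mem(a, A), Not(mem(a, cup(A, B))))
    print(s.check())                           # unsat: the axiom fires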

Formal Methods for Developing and Analyzing Algorithms and Systems

Contributions to Formal Methods of System Design

Formal framework for developing safety-critical interactive systems

F3FLUID is a unified formal framework for the development of safety-critical interactive systems. This framework is based on the Formal Language of User Interface Design (FLUID) defined in the FORMEDICIS ANR project (2017–2022) for expressing high-level system requirements for interactive systems. FLUID is specifically designed for handling concepts of safety-critical interactive systems, including domain knowledge. A FLUID model is used as a source model for generating several target models in different modeling languages that support formal verification methods such as theorem proving and model checking.

The Event-B modeling language is used for checking functional behaviors, user interactions, safety properties, and domain properties. A FLUID model is transformed into an Event-B model, and the Rodin tool is then used to check its internal consistency with respect to the given safety properties. In addition, an interactive cooperative objects (ICOs) model is derived from the Event-B model for animation, visualization and validation of dynamic behaviors, visual properties, and task analysis, using the ProB model checker. Finally, an industrial case study complying with the ARINC 661 standard, Multi-Purpose Interactive Applications (MPIA), is used to illustrate the effectiveness of our F3FLUID framework for the development of safety-critical interactive systems. Moreover, we show 18 how formal ontologies can be used to model domain-specific knowledge, as well as how system models may refer to these ontologies through annotations. This approach relies on the Event-B state-based refinement and proof method, and its associated theories, to define a framework in which domain-specific knowledge ontologies are formalized as Event-B theories defining the data types used in Event-B system design models. Finally, this framework is deployed for the specific case of interactive critical systems. To illustrate the proposed approach, a case study of the Traffic Collision Avoidance System (TCAS) is developed.

A Secure Low-Latency Protocol for Mitigating High Computation Overhead in Wi-Fi Networks

The increasing popularity of wireless networks in the industrial, embedded, medical and public sectors has made them an appealing attack surface for attackers who exploit vulnerabilities in network protocols to launch attacks such as Evil Twin, man-in-the-middle, sniffing, etc., which may result in economic and non-economic losses. To protect wireless networks against such attacks, the IEEE 802.11 working group keeps updating its protocol standards with new and more secure versions. There has always been a direct correlation between attacks and the improvement of protocol standards: as the sophistication of attacks increases, protocol standards tend to move towards higher security, resulting in a significant increase in both latency and computational overhead, and severe degradation of the performance of low-latency applications such as the Industrial Internet of Things (IIoT), automotive, robotics, etc. The paper 16 highlights the importance of both latency and security in wireless networks from the implementation and performance perspectives. We review existing IEEE 802.11 protocols in terms of the security offered and the overhead incurred, to substantiate the need for a protocol that, in addition to providing strong security against attacks, also keeps latency and overhead low. We also propose a secure and low-latency protocol called Secure Authentication Protocol (SAP) that operates in two phases: the first phase is a one-time process implemented using asymmetric cryptography, and the second phase is implemented using symmetric cryptography. The protocol is structured in a way that maintains the original structure of IEEE 802.11 protocols and performs both phases using fewer messages than existing protocols. By simulating the protocol using the well-established OMNeT++ simulator, we demonstrated that the proposed protocol incurs a low computation overhead, making it ideal for low-latency applications. We extensively verified the security properties of the proposed protocol using formal verification with the widely-accepted Scyther tool. Finally, we performed a comparative analysis of SAP with existing IEEE 802.11 wireless network protocols to highlight the improvement.
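
The two-phase structure can be sketched in Python with the pyca/cryptography library (our illustration of the general idea only, not the actual SAP message format): an expensive asymmetric exchange happens once to establish a shared key, after which all traffic uses cheap symmetric cryptography.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Phase 1 (once, asymmetric): the client sends a fresh symmetric key,
    # encrypted under the access point's public key.
    ap_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    session_key = Fernet.generate_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped = ap_private.public_key().encrypt(session_key, oaep)
    unwrapped = ap_private.decrypt(wrapped, oaep)

    # Phase 2 (per message, symmetric): fast authenticated encryption.
    channel = Fernet(unwrapped)
    token = channel.encrypt(b"association request")
    assert Fernet(session_key).decrypt(token) == b"association request"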

Modeling hybrid systems by refinement.

Whenever continuous dynamics and discrete control interact, hybrid systems arise. As hybrid systems become ubiquitous and ever more complex, analysis and synthesis techniques are in high demand for designing safe hybrid systems. This is challenging due to the nature of hybrid systems and their designs, and the question arises of how to formulate and reason about their safety problems. Previous work has demonstrated how to extend the discrete modeling language Event-B with support for continuous domains in order to integrate traditional refinement in hybrid system design. We have extended our strategy 50 for coherently refining an abstract hybrid system design with safety constraints down to a concrete one that integrates implementable discrete control and behaves safely. We demonstrate our proposal on a smart heating system that regulates room temperature between two reference values. The main innovation lies in the fact that we combine the Event-B modelling language (events) and the B modelling language (operations). The transformation of events into operations is proved correct using a proof assistant. We also explore the frequency model approach of control theory using the Event-B modelling language. The frequency modelling style is unusual, and we provide a Python artefact to assist with the use of the approach.
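
The essence of the refined discrete controller in such a case study can be sketched in a few lines of Python (our simplified rendering, not the Event-B model itself): a hysteresis controller switches the heater at the two reference temperatures, and a discretized dynamics shows the temperature staying within a safety envelope.

    def step(temp, heater_on, t_low=18.0, t_high=22.0,
             heat_rate=0.5, cool_rate=0.3, dt=1.0):
        # discrete control decision at the two reference values
        if temp <= t_low:
            heater_on = True
        elif temp >= t_high:
            heater_on = False
        # simple discretization of the continuous dynamics
        temp += (heat_rate if heater_on else -cool_rate) * dt
        return temp, heater_on

    temp, on = 20.0, False
    for _ in range(20):
        temp, on = step(temp, on)
        assert 17.0 <= temp <= 23.0   # safety envelope (with discretization slack)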

Semantics transformations.

Any given programming language may come with several semantics definitions, such as big-step, small-step, or even abstract machines, which can be seen as an implementation of a language. They all describe identical behaviors of programs, but each may be better adapted for some purpose: for instance, small-step semantics are better suited to prove subject reduction.
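
For readers less familiar with these styles, here is a toy Python contrast (ours, on arithmetic expressions rather than process calculi): big-step computes the result in one recursive pass, while small-step exposes individual reduction steps, the granularity at which zipper semantics and abstract machines operate.

    Lit = lambda n: ('lit', n)
    Add = lambda l, r: ('add', l, r)

    def big_step(e):                      # big-step: e evaluates to n
        if e[0] == 'lit':
            return e[1]
        _, l, r = e
        return big_step(l) + big_step(r)

    def small_step(e):                    # small-step: one reduction e -> e'
        _, l, r = e
        if l[0] == 'lit' and r[0] == 'lit':
            return Lit(l[1] + r[1])
        if l[0] != 'lit':
            return Add(small_step(l), r)
        return Add(l, small_step(r))

    e = Add(Lit(1), Add(Lit(2), Lit(3)))
    while e[0] != 'lit':
        e = small_step(e)                 # drive the small-step semantics
    assert e[1] == big_step(Add(Lit(1), Add(Lit(2), Lit(3)))) == 6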

To have access to all kinds of semantics at once, we develop transformations between semantics, so that one can be generated from another at no extra cost for the language designer. In previous work 59, we developed a semantics format called zipper semantics, a small-step semantics from which we can automatically derive sound and complete abstract machines for non-deterministic languages such as process calculi. We build on this work in two ways: first, we propose a new format of zipper semantics, called leaf-first, in which we can express lazy scope extrusion or join patterns. Second, we make the non-deterministic abstract machines more efficient by reusing some design principles of abstract machines for the lambda-calculus, such as environments and refocusing.

Automated Reasoning Techniques for Verification

Synthesis of inductive invariants for distributed algorithms.

In joint work with Aman Goel (Amazon) and Karem Sakallah (University of Michigan), we investigated the use of symbolic model checking techniques for automatically computing inductive invariants for parameterized distributed algorithms. Specifically, the IC3PO model checker applies the well-known IC3 model checking algorithm to finite instances of the algorithm and, in case of success, retrieves inductive invariants for those instances. It then inspects these invariants for symmetries with respect to space (processes) and time (e.g., ballot numbers used during the algorithm) and expresses those symmetries by introducing quantifiers. The resulting formulas are then checked for larger input sizes until no new invariants are inferred. We applied the technique to two versions of the well-known Bakery algorithm that ensures mutual exclusion among N processes communicating via shared variables. For both versions, IC3PO generated invariants that were remarkably similar to, but more permissive than, human-written invariants used in a previous interactive proof of the algorithm. This work, presented at FORTE 2023 34, suggests that automated invariant inference is becoming a viable alternative to labor-intensive human-written proofs. In ongoing work, we are investigating invariant synthesis for the significantly more complex Raft consensus algorithm. One of the main problems there is how to represent the sequential log maintained by the nodes in a first-order logic of limited expressiveness.
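
The core query behind such tools is easy to state with an SMT solver; here is a minimal Python/z3 sketch of our own (a toy counter system, unrelated to Bakery) checking that a candidate invariant is inductive:

    from z3 import Ints, Solver, Implies, Not, And

    x, x1 = Ints('x x1')
    init = x == 0                 # initial condition
    trans = x1 == x + 2           # transition relation (x' = x + 2)
    inv = lambda v: v % 2 == 0    # candidate invariant: x is even

    s = Solver()
    # initiation: Init => Inv, and consecution: Inv /\ Trans => Inv'
    s.add(Not(And(Implies(init, inv(x)),
                  Implies(And(inv(x), trans), inv(x1)))))
    print(s.check())              # unsat: the invariant is inductive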

New primitives in PlusCal for modeling distributed algorithms.

We designed an extension of the PlusCal algorithmic language for modeling distributed algorithms. Rather than introducing many new features that could break PlusCal’s design objective of being a lightweight front-end for writing TLA+ specifications, we added only a few orthogonal concepts inspired by those found in distributed programming languages, while remaining compatible with the existing language and keeping the generation of human-readable TLA+ specifications simple. Compared to the original PlusCal language, Distributed PlusCal allows processes to consist of multiple threads that communicate via process-local variables, and it introduces communication channels that can be declared as FIFO-ordered or not. We illustrated Distributed PlusCal using two well-known algorithms, and our preliminary findings indicate that the extensions help us express distributed algorithms in a natural way. Moreover, the verification overhead with respect to a specification written directly in TLA+ does not differ from that of ordinary PlusCal. This work was presented in a conference paper 29.

Validating traces of distributed systems.

Programming distributed systems is error-prone, and while formal methods such as TLA+ can help design such systems, the translation from the verified specification to the final implementation is rarely certified. Nevertheless, we believe that we can still gain confidence that an implementation behaves according to a high-level specification by collecting traces of executions and checking that they are allowed by the specification. We have developed a framework for the instrumentation of Java programs that allows one to record events and variable updates that are meaningful at the level of the TLA+ specification, and we use the TLA+ model checker to verify that the recorded trace of an execution indeed corresponds to a possible execution of the high-level specification 4. Although this cannot formally establish the correctness of an implementation, preliminary experience with this approach indicates that it can be surprisingly effective at discovering discrepancies between the TLA+ specification and the implementation.
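
The principle can be sketched in Python (a stand-in of our own devising; the actual framework instruments Java programs and replays traces with the TLA+ tools): log state updates during execution, then check that every logged transition is allowed by the specification's next-state relation.

    import json

    trace = []
    def record(var, value):                   # instrumentation hook
        trace.append({"var": var, "value": value})

    # stand-in for the spec: a counter may only increase by exactly 1
    def next_ok(prev, cur):
        return cur == prev + 1

    counter = 0
    record("counter", counter)
    for _ in range(3):
        counter += 1
        record("counter", counter)            # instrumented update

    values = [e["value"] for e in trace]
    assert all(next_ok(p, c) for p, c in zip(values, values[1:]))
    print(json.dumps(trace))                  # what would be fed to the checker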

Model checking for timed and linear systems

Opacity of timed automata.

Opacity of a system is a property describing the impossibility for an external attacker to deduce some private information by observing an execution of the system. In two publications 22, 11 we considered the opacity of systems modeled by timed automata, where an attacker has access only to the duration of executions. Our goal was to determine whether a system is opaque and, in some cases, under which conditions it can be made opaque. More precisely, the models we considered allow for parameters representing unknown delays within the system, and we wished to identify for which parameter values opacity is achieved. Throughout this line of research we have considered many different variants of the notion of opacity, for instance by allowing the secret behavior to expire, meaning that private behavior stops being deemed private if it is not detected by the observer quickly enough. The paper 11 serves as a survey of the current state of this line of research.
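
A minimal Python illustration (ours) of the duration-based notion: the system is opaque when timing alone cannot separate secret runs from non-secret ones.

    # durations observable on runs visiting the secret, resp. avoiding it
    secret_durations = {2, 3, 5}
    public_durations = {1, 2, 3, 4, 5}

    # opaque: every secret duration is also the duration of some public run,
    # so an observer seeing only durations learns nothing about the secret
    print(secret_durations <= public_durations)   # True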

Mathematical tools for the analysis of systems.

For the analysis of timed automata, including the analysis of opacity mentioned above, the notion of semi-linear sets (i.e., sets that can be defined in Presburger arithmetic) appears regularly. For instance, the durations of runs in a timed automaton can be described as such a set. Presburger arithmetic however fails once parameters are included in the automata: the presence of parameters often leads to multiplications between integer variables, which cannot be represented in Presburger arithmetic. Peano arithmetic could of course represent those sets, but it is well known to be undecidable and thus not a real option. Much previous research has studied the gap between Presburger and Peano arithmetic; to our knowledge, none of the existing work can handle the formulas that we generate in the analysis of parametric timed automata. In 51, we developed a parametric extension of Presburger arithmetic tailored to our needs. It shows how to handle some formulas beyond Presburger arithmetic, which we plan to use in future work.
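
The difference can be seen concretely with z3 in Python (toy constraints of our own): linear integer arithmetic (Presburger, with divisibility) is decidable, whereas a parameter multiplying a variable produces a nonlinear constraint, a fragment that is undecidable in general even though solvers may still succeed on particular instances.

    from z3 import Ints, Solver

    x, p = Ints('x p')

    s = Solver()
    s.add(x >= 0, x % 3 == 0)        # Presburger-expressible: decidable
    print(s.check())                  # sat

    s2 = Solver()
    s2.add(p * x == 6, p > 1, x > 1)  # parameter times variable: nonlinear;
    print(s2.check())                 # sat here (p=2, x=3), but the general
                                      # theory is undecidable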

Model checking of rounded linear loops.

Loops are a fundamental staple of any programming language, and the study of loops plays a pivotal role in many subfields of computer science, including automated verification, abstract interpretation, program analysis, and semantics. A lot of work has been done in the linear dynamical systems community on representing program loops as simple linear systems, and on analyzing which kinds of properties formulated in monadic second-order logic can be checked over the evolution of a loop. In 35, we modify the usual linear models to account for the fact that computers rely on floating-point arithmetic. This change leads to a completely different approach to the problem, as the usual tools for tackling linear dynamical systems fail to apply. An interesting result is that, while termination of simple linear loops is a long-standing open problem in the usual setting, it becomes undecidable with floating-point numbers. Moreover, if we add a very simple restriction on the system, one that changes nothing in the original setting, termination in our setting becomes decidable.
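
A tiny Python experiment (ours) shows why rounding changes the picture: a loop such as "while x != 0: x = x / 3" never terminates over exact rationals, but with floating-point numbers the value eventually underflows to exactly 0.0.

    from fractions import Fraction

    def orbit(x0, a, steps):
        x = x0
        for _ in range(steps):
            x = a * x          # for floats, rounding happens implicitly
        return x

    exact = orbit(Fraction(1), Fraction(1, 3), 800)   # stays a positive rational
    rounded = orbit(1.0, 1.0 / 3.0, 800)              # underflows to exactly 0.0
    print(exact > 0, rounded == 0.0)                  # True True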

Verification and Analysis of Dynamic Properties of Biological Systems

Approximate Conservation Laws

Section 3.3 showed an example of the analysis of the kinetics of a chemical reaction network at multiple timescales 64. Although general in its implementation, this reduction method can fail in a number of cases. A major cause of failure is the degeneracy of the quasi-steady state, when the fast dynamics has a continuous variety of steady states. Typically, this happens when the fast truncated ODEs have first integrals, i.e., quantities that are conserved on any trajectory and that have to be fixed at arbitrary values depending on the initial conditions. The quasi-steady states are then no longer hyperbolic, because the Jacobian matrix of the fast part of the dynamics becomes singular. To address this issue, we have now proposed a concept of approximate conservation laws, which allows additional reductions.

Technically, this framework requires parametric versions of various established algorithms from symbolic computation. One simple example is the computation of the rank of a matrix with real parameters, which produces a formal finite case distinction where the possible ranks are paired with necessary and sufficient conditions given as Tarski formulas. This allows us to identify critical cases with respect to the above-mentioned singularity of the Jacobian. Another example is the use of comprehensive Gröbner bases in the course of the parametric computation of certain syzygy modules. From a practical point of view, a central issue with all such algorithms is the combinatorial explosion of the number of cases.
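
A minimal SymPy sketch (ours) of the first example: the rank of a matrix containing a real parameter k is not a single number but a case distinction on k.

    import sympy as sp

    k = sp.symbols('k', real=True)
    M = sp.Matrix([[k, 1], [1, k]])
    det = sp.expand(M.det())                   # k**2 - 1

    # rank 2 iff det != 0, i.e. k != 1 and k != -1; rank 1 otherwise
    # (some entry of M is nonzero for every value of k)
    for cond, rank in [(sp.Ne(det, 0), 2), (sp.Eq(det, 0), 1)]:
        print(f'rank = {rank}  if  {cond}')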

We use SMT solving as well as real quantifier elimination methods to detect inconsistent cases and prune the tree of case distinctions early. The decision procedures used are typically doubly exponential and can easily turn into a bottleneck that prevents termination within reasonable time altogether, in particular when the degrees of the polynomial terms get larger. Since the results remain correct even when some redundant cases are not eliminated, we combine various methods and use suitable timeouts. This work has resulted in two articles, which have been accepted for publication in the SIAM Journal on Applied Dynamical Systems.
