Research


Overall objectives

The VeriDis project team includes members of the MOSEL group at LORIA, the computer science laboratory in Nancy, and members of the research group Automation of Logic at Max-Planck-Institut für Informatik in Saarbrücken. It is headed by Stephan Merz and Christoph Weidenbach. VeriDis was created in 2010 as a local research group of Inria Nancy – Grand Est and has been an Inria project team since July 2012.

The objectives of VeriDis are to contribute to advances in verification techniques, including automated and interactive theorem proving, and to make them available for the development and analysis of concurrent and distributed algorithms and systems, based on mathematically precise and practically applicable development methods. The techniques that we develop are intended to assist designers of algorithms and systems in carrying out formally verified developments, where proofs of relevant properties, as well as bugs, can be found with a high degree of automation.

Within this context, we work on techniques for automated theorem proving for expressive languages based on first-order logic, with support for theories (fragments of arithmetic, set theory etc.) that are relevant for specifying algorithms and systems. Ideally, systems and their properties would be specified using high-level, expressive languages, errors in specifications would be discovered automatically, and finally, full verification could also be performed completely automatically. Due to the fundamental undecidability of the problem, this cannot be achieved in general. Nevertheless, we have observed important advances in automated deduction in recent years, to which we have contributed. These advances suggest that a substantially higher degree of automation can be achieved over what is available in today’s tools supporting deductive verification. Our techniques are developed within SMT (satisfiability modulo theories) solving and superposition reasoning, the two main frameworks of contemporary automated reasoning that have complementary strengths and weaknesses, and we are interested in making them converge when appropriate. Techniques developed within the symbolic computation domain, such as algorithms for quantifier elimination for appropriate theories, are also relevant, and we are working on integrating them into our portfolio of techniques. In order to handle expressive input languages, we are working on techniques that encompass tractable fragments of higher-order logic, for example for specifying inductive or co-inductive data types, for automating proofs by induction, or for handling collections defined through a characteristic predicate.

Since full automatic verification remains elusive, another line of our research targets interactive proof platforms. We intend these platforms to benefit from our work on automated deduction by incorporating powerful automated backends and thus raise the degree of automation beyond what current proof assistants can offer. Since most conjectures stated by users are initially wrong (due to type errors, omitted hypotheses or overlooked border cases), it is also important that proof assistants be able to detect and explain such errors rather than letting users waste considerable time in futile proof attempts. Moreover, increased automation must not come at the expense of trustworthiness: skeptical proof assistants expect to be given an explanation of the proof found by the backend prover that they can certify.

Model checking is also an established and highly successful technique for verifying systems and for finding errors. Our contributions in this area specifically target quantitative, in particular timed or probabilistic, systems. A specificity of VeriDis is to consider partially specified systems, containing parameters, in which case the verification problem becomes the synthesis of suitable parameter valuations.

Our methodological and foundational research is accompanied by the development of efficient software tools, several of which go beyond pure research prototypes: they have been used by others, have been integrated in verification platforms developed by other groups, and participate in international competitions. We also validate our work on verification techniques by applying them to the formal development of algorithms and systems. We mainly target high-level descriptions of concurrent and distributed algorithms and systems. This class of algorithms is by now ubiquitous, ranging from multi- and many-core algorithms to large networks and cloud computing, and their formal verification is notoriously difficult. Targeting high levels of abstraction allows the designs of such systems to be verified before an actual implementation has been developed, contributing to reducing the costs of formal verification. The potential of distributed systems for increased resilience to component failures makes them attractive in many contexts, but also makes formal verification even more important and challenging. Our work in this area aims at identifying classes of algorithms and systems for which we can provide guidelines and identify patterns of formal development that make verification less an art and more an engineering discipline. We mainly target components of operating systems, distributed and cloud services, and networks of computers or mobile devices.

Beyond formal system verification, we pursue applications of some of the symbolic techniques that we develop in other domains. We have observed encouraging success in using techniques of symbolic computation for the qualitative analysis of biological and chemical networks described by systems of ordinary differential equations that were previously only accessible to large-scale simulation. Such networks include biological reaction networks as they occur in models for diseases such as diabetes or cancer. They furthermore include epidemic models such as variants and generalizations of SEIR models, which are typically used for Influenza A or Covid-19. This work is being pursued within a large-scale interdisciplinary collaboration. Through it, we aim for our work grounded in verification to have an impact on the sciences beyond engineering, which will in turn feed back into our core formal methods community.

Last activity report: 2021

Results

New results

Automated and Interactive Theorem Proving

Contributions to SMT Techniques

Quantifier Handling in First-Order SMT.

Designing techniques for handling quantifiers in SMT has always been an important objective of the team.

Quantifier reasoning in SMT solvers relies on instantiation: ground instances are generated heuristically from the quantified formulas until a contradiction is reached at the ground level. Previous instantiation heuristics, however, often fail in the presence of nested quantifiers. To address this issue, we introduced a unification-based method that augments the problem with shallow quantified formulas obtained from assertions with nested quantifiers. These new formulas help unlock the regular instantiation techniques, but parsimony is necessary since they might also be misguiding. To mitigate this, we identified some effective restricting conditions. The method has been implemented in the veriT solver and tested on benchmarks from the SMT-LIB. It allowed the solver to prove more formulas, faster. This work was published at FroCoS 2021, where the paper received the best student paper award 35.
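The following Python fragment is a naive sketch of the general instantiation loop just described (our own toy encoding: exhaustive instantiation and a resolution-based ground check stand in for veriT's heuristics and its SAT/theory engine):

    from itertools import product

    def neg(lit):
        sign, atom = lit
        return (not sign, atom)

    def ground_unsat(clauses):
        # Naive ground refutation via propositional resolution closure;
        # a real SMT solver would use a SAT solver plus theory reasoning.
        clauses = {frozenset(c) for c in clauses}
        while True:
            new = set()
            for c1 in clauses:
                for c2 in clauses:
                    for lit in c1:
                        if neg(lit) in c2:
                            r = (c1 - {lit}) | (c2 - {neg(lit)})
                            if not r:
                                return True        # empty clause: contradiction
                            new.add(frozenset(r))
            if new <= clauses:
                return False                       # closure reached, no conflict
            clauses |= new

    def instantiation_loop(ground, quantified, terms, max_rounds=10):
        # `quantified` holds (arity, clause template) pairs; instances are
        # generated exhaustively here, heuristically in an actual solver.
        clauses = {frozenset(c) for c in ground}
        for _ in range(max_rounds):
            if ground_unsat(clauses):
                return "unsat"
            inst = {frozenset(tpl(*combo))
                    for n, tpl in quantified
                    for combo in product(terms, repeat=n)}
            if inst <= clauses:
                return "unknown"                   # saturated without conflict
            clauses |= inst
        return "unknown"

    # Toy run: p(a), forall x. (not p(x) or q(x)), not q(a)  -->  unsat
    ground = [[(True, "p(a)")], [(False, "q(a)")]]
    quantified = [(1, lambda t: [(False, "p(%s)" % t), (True, "q(%s)" % t)])]
    print(instantiation_loop(ground, quantified, ["a"]))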

Quantifier Handling in Higher-Order SMT.

Joint work with Haniel Barbosa (Universidade Federal de Minas Gerais, Brazil).

SMT solvers have over the years been able to cope with increasingly expressive logics, from ground formulas to full first-order logic (FOL). In the past, we proposed a pragmatic extension for SMT solvers to support higher-order logic reasoning natively without compromising performance on FOL reasoning, thus leveraging the extensive research and implementation efforts dedicated to efficient SMT solving. However, the higher-order SMT solvers resulting from this work are not as effective as we would expect given their performance on first-order logic. We believe this comes from the fact that only the core of the SMT solver was extended, ignoring in particular the modules for quantifier instantiation.

This motivated us to start working in 2020 on an extension of the main quantifier-instantiation approach (congruence closure with free variables, CCFV) to higher-order logic. This work is still ongoing: we are working on an encoding of the higher-order CCFV problem into a set of SAT constraints. In 2020, we concentrated our efforts mainly on the theory, proving the soundness and completeness of our approach. This year, as a first step towards an implementation, we designed precise pseudocode for all elements of the CCFV computation.

Proofs for SMT.

We previously developed a framework for processing formulas in automatic theorem provers, with generation of detailed proofs that can be checked by external tools, including skeptical proof assistants. The main components are a generic contextual recursion algorithm and an extensible set of inference rules. Clausification, Skolemization, theory-specific simplifications, and expansion of 'let' expressions are instances of this framework. With suitable data structures, proof generation adds only a linear-time overhead, and proofs can be checked in linear time. We implemented the approach in the SMT solver veriT. This allowed us to dramatically simplify the code base while increasing the number of problems for which detailed proofs can be produced. Our publication at CADE 47 demonstrates the excellent results of our approach, building on our previous work on proof formats for SMT and on proof reconstruction within the proof assistant Isabelle/HOL (e.g., 81). Our proof format was moreover the basis for the standard Alethe format 46, which is now being adopted by the community.
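The following Python sketch conveys the flavor of proof-producing contextual recursion (assumed term encoding and rule names, not veriT's code or the Alethe rules): it expands 'let' bindings and simplifies double negations, logging every rewrite so that a checker can replay the proof step by step:

    def process(term, ctx, proof):
        # Terms are atoms (strings) or tuples; `ctx` carries the current
        # 'let' bindings; every rewrite appends one step to `proof`.
        if isinstance(term, str):
            if term in ctx:
                proof.append(("let_expand", term, ctx[term]))
                return ctx[term]
            return term
        op, *args = term
        if op == "let":                            # ("let", var, val, body)
            var, val, body = args
            return process(body, {**ctx, var: process(val, ctx, proof)}, proof)
        args = [process(a, ctx, proof) for a in args]
        if op == "not" and isinstance(args[0], tuple) and args[0][0] == "not":
            proof.append(("not_not", (op, args[0]), args[0][1]))
            return args[0][1]                      # simplify double negation
        return (op, *args)

    proof = []
    t = ("let", "x", ("not", ("not", "p")), ("and", "x", "q"))
    print(process(t, {}, proof))                   # ('and', 'p', 'q')
    print(proof)                                   # linear log of the rewrites

Since each subterm is visited once and each rewrite emits one step, the proof log grows linearly with the processing work, matching the linear-time overhead mentioned above.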

Automated reasoning techniques beyond SMT

Extensions of a formal framework for automated reasoning.

We are part of a group developing a framework for formal refutational completeness proofs of abstract provers that implement automated reasoning calculi, especially calculi based on saturation such as ordered resolution and superposition.

Last year, we published a framework that fully captures the dynamic aspects of proof search with a saturation calculus. This year, we extended this work in two directions. First, we finished a mechanization of the framework in Isabelle/HOL, including a case study. This research was presented at CPP 2021 50. Second, we extended the work to account for clause splitting as implemented in superposition provers such as SPASS and Vampire. These provers use a SAT solver (either built-in or off-the-shelf) to explore the search space more efficiently. This extension of the framework was highly nontrivial and revealed some completeness issues in the theorem prover Vampire. This work was presented at CADE 2021 33.

During his master's internship, Qi Qiu extended the Isabelle formalization with representations of the main loops of saturation-based theorem provers.

Superposition for full higher-order logic.

In previous work, we designed superposition calculi for two fragments of higher-order logic as stepping stones towards full higher-order logic. We have now designed two more superposition calculi: one that handles native Booleans in first-order logic, and one for full higher-order logic that builds on all the others and includes partial application (currying), anonymous functions (λ-expressions), and native Booleans. The proof system works on βη-equivalence classes of λ-terms and relies on higher-order unification to achieve refutational completeness for Henkin semantics.
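For readers less familiar with these notions, the two defining equations of βη-equivalence are standard (textbook material, not specific to our calculus):

    (λx. s) t  =β  s[t/x]                                  (β: function application)
    λx. (s x)  =η  s,  if x does not occur free in s       (η: extensionality)

Working up to βη-equivalence means that the calculus never needs to distinguish, say, λx. f x from f, and higher-order unification likewise treats such terms as identical.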

We implemented the calculus in the Zipperposition prover. This implementation helped us win the first-place trophy, ahead of Vampire, in the 2021 edition of the CADE ATP System Competition (CASC). Our own empirical evaluation includes benchmarks from the TPTP (Thousands of Problems for Theorem Provers) and interactive verification problems exported from Isabelle/HOL. The results appear promising and suggest that an optimized implementation inside a competitive prover such as E, SPASS, or Vampire would outperform existing higher-order automatic provers. This research was presented at CADE 2021 27, 43, 51. The last of these papers won the best student paper award at the conference.

Relevance of clauses for resolution.

A clause is relevant for a refutation with respect to an unsatisfiable clause set if it occurs in all refutation proofs; it is semi-relevant if it occurs in at least one refutation proof. We have shown that, for a clause C, the question whether it is semi-relevant can be reduced to the question whether there exists a set-of-support (SOS) refutation whose set of support is the singleton {C} 37. To this end, we generalized the well-known completeness result on SOS resolution 83: SOS resolution is complete if and only if there exists a resolution refutation using at least one clause from the SOS 37. The notion of semi-relevance is in particular useful to test the contribution of a clause or formula to a specific consequence.
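The reduction can be made concrete on a propositional toy example (our own sketch; the results in 37 concern first-order resolution): C is semi-relevant in the unsatisfiable set N exactly when SOS resolution with set of support {C} derives the empty clause:

    def resolvents(c1, c2):
        # Literals are nonzero ints; -lit is the complement.
        for lit in c1:
            if -lit in c2:
                yield frozenset((c1 - {lit}) | (c2 - {-lit}))

    def sos_refutes(axioms, sos):
        # SOS restriction: every inference uses at least one SOS clause.
        axioms, sos = set(axioms), set(sos)
        while True:
            new = set()
            for c1 in sos:
                for c2 in axioms | sos:
                    for r in resolvents(c1, c2):
                        if not r:
                            return True            # derived the empty clause
                        new.add(r)
            if new <= sos:
                return False                       # saturated, no refutation
            sos |= new

    def semi_relevant(n, c):
        return sos_refutes(set(n) - {c}, {c})

    # N = {p, ¬p ∨ q, ¬q, r}: r occurs in no refutation, ¬p ∨ q in one.
    p, q, r = 1, 2, 3
    N = [frozenset({p}), frozenset({-p, q}), frozenset({-q}), frozenset({r})]
    print(semi_relevant(N, frozenset({r})))        # False
    print(semi_relevant(N, frozenset({-p, q})))    # True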

Well-founded cyclic proofs.

In the past few years, cyclic proofs have proved to be natural and useful tools for dealing with fixpoint logics (logics for reasoning about induction and co-induction). Cyclic proofs are usually considered non-wellfounded, mainly because they are viewed as finite, regular representations of (a subclass of) infinite proofs. Despite this view, the soundness of some of them can be expressed using well-founded arguments. For example, in the context of first-order logic with inductive definitions, the sequent-based proofs built with the CLKIDω cyclic induction proof system can also be validated using Noetherian (well-founded) induction arguments; the induction ordering is the underlying semantic ordering used to show some global trace condition, mainly ensuring that the steps along the infinite paths of cyclic derivations of false sequents are decreasing. This provides a bridge with the state-of-the-art (formula-based) Noetherian induction reasoning. A paper was published at SCSS 2021 49, and we expect that proof techniques specific to this domain will make cyclic reasoning more effective.

Abduction for Description Logics.

Abduction is the process of explaining new observations using background knowledge. It is central to knowledge discovery and knowledge processing and has been intensely studied in various domains such as artificial intelligence, philosophy and logic. In the description logic literature, abduction has received little attention, despite being recognised as important for ontology repair, query update and matchmaking.

As part of his PhD, Fajar Haifani develops a technique for abduction in the lightweight description logic ℰℒ, which specializes in representing subset inclusions and membership. His approach consists in translating the problem to first-order logic to harness the power of the automated deduction tool SPASS to produce prime implicates, i.e., most general consequences, from which the solutions of the abduction problem can be reconstructed. Theoretical results of this work were presented at the SOQE and XLoKR workshops this year 36.

In joint work with P. Koopmann, W. Del-Pinto and R. Schmidt, we are also working on an extended version of earlier work on abduction in the expressive description logic 𝒜ℒ𝒞 82.

Proofs for TLA+.

The logic of TLA+ mixes first-order and modal reasoning. In particular, the predicate ENABLED A is true in a state s if there exists a state t such that the action A is true of the pair (s, t). This predicate is the basis for reasoning about fairness conditions. We designed methods for reasoning about ENABLED and implemented them in the TLA+ proof system TLAPS. The most elementary technique consists in replacing the ENABLED operator by existential quantification over all primed state variables. In order to achieve better automation, we also implemented rules that reflect the monotonicity of ENABLED with respect to implication, as well as a rewrite system that pushes the ENABLED operator inward in complex formulas and simplifies the resulting proof obligations. These techniques have been validated on several formal-proof case studies, and they allow us for the first time to mechanically prove liveness properties of TLA+ specifications.
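As a small illustration of the elementary technique (a generic example of ours, following standard TLA+ semantics rather than a particular case study), consider the actions A ≜ x' = x + 1 ∧ y' = x and B ≜ x' = x + 1 ∧ x' = x, where x is an integer variable. Replacing ENABLED by existential quantification over the primed variables gives

    ENABLED A  ≡  ∃ x', y' : x' = x + 1 ∧ y' = x   ≡  TRUE
    ENABLED B  ≡  ∃ x' : x' = x + 1 ∧ x' = x       ≡  FALSE

so A is enabled in every state, whereas B is never enabled; fairness assumptions about B would therefore be vacuous.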

In his PhD work, Antoine Defourné investigates encodings of the non-temporal theory of TLA+ into the input languages of automated theorem provers for first-order and higher-order logic, including SMT solvers and Zipperposition. Preliminary results appeared in a paper published at FroCoS 2021 32. The new encodings were applied to TLAPS proofs that establish mutual exclusion for the “deconstructed” Bakery algorithm introduced by Lamport 84, as well as refinement of this algorithm by the distributed state machine from 87. These proofs are available online, and the new encodings led to a significant improvement in the degree of automation.

Verification of an algorithm for computing strongly connected components.

In the course of his research project for École des Mines de Nancy, Vincent Trélat is formalizing in Isabelle/HOL an algorithm for computing the strongly connected components of a graph, presented in Vincent Bloemen’s PhD thesis 77 and originally due to Dijkstra. After showing the correctness of the sequential version of the algorithm, the objective is to verify the data structures underlying a concurrent implementation.

Formal Methods for Developing and Analyzing Algorithms and Systems

Contributions to Formal Methods of System Design

Simpler Rules for Auxiliary Variables.

Refinement of a specification expressed at a high level of abstraction by a lower-level specification is a fundamental concept in formal system development. A key problem in proving refinement is to demonstrate that suitable values of the internal variables of the high-level specification can be assigned to every possible execution of the low-level specification. The standard technique for doing so is to exhibit a refinement mapping where values for these variables are computed for each state, but it is also well known that this technique is incomplete. In joint work with Leslie Lamport (Microsoft Research), we revisit the classic paper 68 that introduced constructions for auxiliary variables in order to strengthen the refinement mapping technique. In particular, we introduce simpler rules for defining prophecy variables and demonstrate how they can be used for proving the correctness of an algorithm implementing a linearizable object. We also show that our constructions of auxiliary variables yield a complete proof method. An article based on this work has been accepted for publication in ACM Transactions on Programming Languages and Systems and will appear in 2022.

Formal Analysis of Critical Interactive Systems.

Interactive systems that allow users to interact with critical systems are qualified as critical interactive systems. Their design requires supporting the different activities and tasks through which users achieve their goals. Examples of such systems are cockpits, control panels of nuclear plants, medical devices, etc. Such critical systems are very difficult to model due to the complexity of the interaction capabilities they offer. We present 20 a formal development methodology for designing interactive applications using a correct-by-construction approach. We propose a refinement strategy based on the model-view-controller (MVC) paradigm to structure and design Event-B formal models of the interactive application. The proposed MVC-based refinement strategy facilitates the development of an abstract model and a series of refined models by introducing the possible modes, the controller behaviour and the visual components of the interactive application, while preserving the required interaction-related safety properties. To demonstrate the effectiveness, scalability, reliability and feasibility of our approach, we use a small example from the automotive domain and real-life industrial case studies from aviation. The entire development is realized in Event-B, and the Rodin tool is used to analyze and verify the correctness of the formalized model.

Integration of Knowledge in Formal Development.

System engineering development processes rely on modeling activities that lead to different design models 54 corresponding to different analyses of the system under consideration. Domain engineering 55 plays a central role in making domain-related properties explicit. We have finalized a collection 58 of results related to ontologies 57 as well as to the domain of interactive systems.

Checking the conformance of a system design to a standard is a central activity in the system engineering life cycle, a fortiori when the system is deemed critical. It ensures that a system or a model of a system faithfully meets the requirements of the specification of a standard, improving the robustness and trustworthiness of the system model. We present 40, 39 a formal framework based on the correct-by-construction Event-B method and related theories for checking, by construction, the conformance of a formal system model to a formalized standard specification. This framework facilitates the formalization of concepts and rules from a standard in the form of an ontology, as well as the formalization of an engineering domain, using an Event-B theory consisting of data types and a collection of operators and properties. Conformance checking is accomplished by annotating the system model with typing conditions. We address an industrial case study borrowed from the aircraft cockpit engineering domain to demonstrate the feasibility and strengths of our approach: the ARINC 661 standard is formalized as an Event-B theory, which formally models and annotates the safety-critical real-world application of a weather radar system for certification purposes.

Modeling hybrid systems by refinement.

Hybrid systems arise whenever continuous dynamics and discrete control interact. As hybrid systems become ubiquitous and ever more complex, analysis and synthesis techniques are in high demand for designing safe hybrid systems. This is challenging due to the nature of hybrid systems and their designs, and to the question of how to formulate and reason about their safety problems. Previous work has demonstrated how to extend the discrete modeling language Event-B with support for continuous domains in order to integrate traditional refinement into hybrid system design. We now propose a strategy 30 that can coherently refine an abstract hybrid system design with safety constraints down to a concrete one, integrating implementable discrete control, that behaves safely. We demonstrate our proposal on a smart heating system that regulates the room temperature between two reference values.
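As a minimal illustration of the kind of implementable discrete control the concrete model integrates (a Python toy with assumed thresholds and plant dynamics, not the Event-B development itself), a hysteresis controller keeps the temperature between the two references:

    # Assumed reference temperatures, for illustration only.
    T_LOW, T_HIGH = 19.0, 21.0

    def step(temp, heating):
        if temp <= T_LOW:
            heating = True                 # too cold: switch heater on
        elif temp >= T_HIGH:
            heating = False                # too warm: switch heater off
        # Crude plant model: +0.5 degrees per cycle heating, -0.3 cooling.
        return temp + (0.5 if heating else -0.3), heating

    temp, heating = 20.0, False
    for _ in range(50):
        temp, heating = step(temp, heating)
        # Safety invariant, up to one control cycle of overshoot.
        assert T_LOW - 0.3 <= temp <= T_HIGH + 0.5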

Certified Semantics Transformations.

Any given programming language may come with several semantics definitions, such as big-step semantics, small-step semantics, or even abstract machines, which can be seen as implementations of a language. They all describe the same behaviors of programs, but each may be better adapted to some purpose: for instance, small-step semantics are better suited for proving subject reduction.

To have access to all kinds of semantics at once, we develop transformations between semantics, so that one can be generated from another at no extra cost for the language designer. We propose a transformation from big-step to small-step semantics and certify its correctness using Coq certificates: for a given input language in big-step style, we generate the small-step semantics and a Coq proof script that shows the correspondence between the two semantics. We also developed a certified transformation from big-step semantics to abstract machines 22. Finally, we generate abstract machines in a generic and complete way for non-deterministic languages such as process calculi, for which only ad hoc and partial implementations existed so far.
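The contrast between the two styles can be seen on a toy language of additions (a Python illustration of ours, independent of the certified Coq development): big-step maps an expression directly to its value, while small-step performs one reduction at a time:

    # Expressions: either an int (a value) or ("add", e1, e2).
    def big_step(e):
        # Big-step: relate an expression directly to its final value.
        if isinstance(e, int):
            return e
        _, e1, e2 = e
        return big_step(e1) + big_step(e2)

    def small_step(e):
        # Small-step: perform exactly one leftmost reduction.
        _, e1, e2 = e
        if isinstance(e1, int) and isinstance(e2, int):
            return e1 + e2
        if isinstance(e1, int):
            return ("add", e1, small_step(e2))
        return ("add", small_step(e1), e2)

    e = ("add", ("add", 1, 2), 4)
    assert big_step(e) == 7
    while not isinstance(e, int):
        e = small_step(e)                  # ("add", 3, 4), then 7
    assert e == 7

Deriving the second definition from the first, together with a proof that iterating small_step agrees with big_step, is the kind of correspondence the certified transformation automates.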

An Extension of PlusCal for Distributed Algorithms.

In previous work 70, we extended the algorithmic language PlusCal 86 with constructs intended for describing distributed algorithms. In his master's internship, Dostonbek Matyakubov consolidated the translator from this PlusCal extension to TLA+ specifications.

Automated Reasoning Techniques for Verification

Static analysis of rewriting systems.

Rewriting is a widely established formalism that is especially well suited for describing program semantics and transformations. In particular, constructor-based term rewriting systems are generally used to model the behaviour of programs in functional languages. In the context of formal verification, it is often necessary to characterize the shape of the reachable terms of such rewrite systems. In particular, when performing (program) transformations, we often want to eliminate some symbols and, more generally, to ensure that some patterns are absent from the result of the transformation.

We have proposed a method to statically analyse constructor term rewriting systems and to verify the absence of patterns from the corresponding normal forms 31. The approach is non-intrusive and avoids the burden of specifying a dedicated language to characterize the result of the transformation: the user is simply asked to indicate, for the corresponding functions, the patterns that should be eliminated and the respective pre-conditions on the arguments of the functions. If the analysed rewriting system features non-linear right-hand sides, false negatives may be obtained; however, when the system is confluent, as is the case for deterministic functional programs, and a strict reduction strategy is used, the method also handles some forms of non-linear right-hand sides. The method has been implemented in Haskell, and the results in terms of expressiveness and efficiency are very encouraging.
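The verified property can be illustrated on a toy rewrite system (a dynamic Python check of ours that merely tests one term; the method of 31 establishes pattern absence statically for all reachable terms):

    # Rule: neg(neg(x)) -> x, applied innermost; the analysis would verify
    # that the pattern neg(neg(_)) is absent from all normal forms.
    def normalize(t):
        if isinstance(t, str):
            return t
        op, *args = t
        args = [normalize(a) for a in args]
        if op == "neg" and isinstance(args[0], tuple) and args[0][0] == "neg":
            return args[0][1]
        return (op, *args)

    def has_double_neg(t):
        if isinstance(t, str):
            return False
        if t[0] == "neg" and isinstance(t[1], tuple) and t[1][0] == "neg":
            return True
        return any(has_double_neg(a) for a in t[1:])

    t = ("neg", ("neg", ("neg", "p")))
    assert has_double_neg(t) and not has_double_neg(normalize(t))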

Towards Mechanization and Application of SUPERLOG.

In joint work with the groups of Markus Krötzsch and Christof Fetzer (Technical University of Dresden), we have introduced a logical fragment called SUPERLOG (Supervisor Logic) that is meant to provide a basis for formalizing abstract control algorithms found in ECUs (Electronic Control Units). The language comes with support for fully automated verification as well as for execution 34. Technically, SUPERLOG is an extension of the (Horn) first-order Bernays-Schönfinkel fragment with arithmetic constraints; it extends the well-known SMT fragment with universally quantified variables. In addition to the already developed sound and complete calculus for the SUPERLOG language 78, we have now developed a Datalog hammer: a procedure that reduces universally as well as existentially quantified queries to plain Datalog 28. It outperforms all available state-of-the-art techniques on SUPERLOG formalizations. The theory is based on the decidability results obtained by Marco Voigt 21.
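To give the flavor of the fragment (a schematic clause of our own, not one of the case-study formalizations): the clauses are function-free, so universally quantified variables range over the finitely many constants of the problem, while numeric variables may additionally carry arithmetic constraints, as in

    ∀ e, v.  (v ≥ 120 ∧ speed(e, v)  →  overspeed(e))

The Datalog hammer then reduces universally or existentially quantified queries over such clause sets, such as ∃e. overspeed(e), to plain Datalog evaluation.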

Parametric timed model checking

Theoretical questions.

In 12, we studied the power of updates in parametric timed automata: we showed that, by adding some restrictions compared to the original model, one can significantly enhance the syntax (by allowing “updates to parameters”) while ensuring that a crucial problem (the emptiness of the set of parameter valuations for which a given discrete location is reachable) remains decidable.

Heuristics and efficient synthesis.

In 23, we proposed new algorithms to synthesize parameter valuations yielding at least one infinite accepting run in a parametric timed automaton. This is important for parametric timed model checking, since the violation of a property (expressed in some logics) can be reduced to the existence of such an infinite accepting run.

In 26, we formalized and published (under the GNU GPL license) a library of benchmarks for parametric timed systems, with the ultimate goal of using it in further work studying the efficiency of synthesis algorithms.

Application to security properties.

In 13, we targeted the formalization of attack-fault trees, and proposed a method to formally derive parameter valuations for which an attack-fault tree (involving quantitative constants such as time and costs) is safe or, on the other hand, makes an attack possible.

Application to real-time systems.

In 11, we modeled and verified the flight control system of a launcher from ArianeGroup. Using the formalism of parametric timed automata and the IMITATOR model checker 24, we notably derived safe timing parameter valuations ensuring not only functional correctness, but also tight constraints on the tasks and their sequential behavior.

Monitoring of hybrid systems.

Finally, we considered the monitoring of hybrid systems: while not model checking strictly speaking, monitoring can provide designers with formal guarantees on concrete system executions. In 53, we proposed a new technique in which the monitoring algorithm takes advantage of a “bounding model” expressed using hybrid automata, which acts as a lightweight overapproximation of the system. As a consequence, our monitoring algorithm can discard false positives and provides designers with more accurate guarantees, while allowing for very expressive specifications. Our algorithm is implemented in a toolkit and scales well.

Verification and Analysis of Dynamic Properties of Biological Systems

Several major research articles on Real Singularities of Implicit Ordinary Differential Equations, on Reduction of Reaction Network Kinetics to Multiple Timescales, and on Geometric Analysis of Steady State Regimes have been published in scientific journals during the reporting period 19, 18, 16. A discussion of that research is available in last year’s report.

Parametric Geometric Analysis of Steady State Regimes.

During the last decades, there has been considerable research on “toricity” of various algebraic structures 79, 91. In that general context, it is natural that “toricity” of the steady state regimes of ordinary differential equations with polynomial vector fields describing the kinetics of reaction networks stands for binomiality of the steady state ideal, which in turn corresponds to the steady state variety over the complex numbers. In our foundational, non-parametric work on the geometric analysis of steady state regimes, we introduce an alternative concept of toricity over the real numbers 16. Our real toricity refers directly to the geometric shape of the real variety itself. We argue that our notion of toricity is more adequate from a biological point of view than the traditional complex one. We give detailed algorithms for both the real and the complex approach, along with prototypical implementations, and demonstrate, on the grounds of systematic benchmarks on a large set of models from the biomodels.net database 88, that the performance of the real approach does not at all fall behind that of the complex one. Technically, our complex algorithms use Gröbner basis techniques, and our real algorithms use real decision procedures, such as SMT solving over QF_NRA, or real quantifier elimination procedures.
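A minimal example of the distinction (textbook mass-action kinetics, not one of the benchmark models): the reversible reaction A ⇌ B with rate constants k1, k2 and concentrations x, y has kinetics

    dx/dt = -k1 x + k2 y,    dy/dt = k1 x - k2 y.

The steady state ideal is generated by the single binomial k1 x - k2 y, which yields toricity over the complex numbers, while the positive real steady states form the ray y = (k1/k2) x, whose image in logarithmic coordinates is a line: the prototypical shape of real toricity.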

As the next natural step, we investigated the same problems with parametric reaction rates. This is well motivated, as reaction rates are either measured with limited precision or estimated, often only by order of magnitude. Relevant biological findings should be robust under variations of those parameters; as Feinberg points out in his excellent textbook: “The network itself will be our object of study, not the network endowed with a particular set of rate constants” 80.

Our generalization over the complex numbers 45 requires the careful use of comprehensive Gröbner bases and corresponding techniques 92. Over the real numbers 44, the presence of parameters exceeds the SMT framework, and we make use of real quantifier elimination methods. We successfully analyzed various biological models from the literature. In benchmark series with n-site phosphorylation networks, we can (for n = 5) process models with up to 54 species and 30 parametric rate constants, which amounts to the elimination of 54 real quantifiers in an 84-dimensional space, arriving at concise, scientifically interpretable conditions in the parameters.
