Overall objectives
The VeriDis project team includes members of the Formal Methods department at LORIA, the computer science laboratory in Nancy, and members of the research group Automation of Logic at Max Planck Institut für Informatik in Saarbrücken. It is headed by Stephan Merz and Christoph Weidenbach. VeriDis was created in 2010 as a local research group of Inria Nancy – Grand Est and has been an Inria project team since July 2012.
The objectives of VeriDis are to contribute to advances in verification techniques, including automated and interactive theorem proving, and to make them available for the development and analysis of concurrent and distributed algorithms and systems, based on mathematically precise and practically applicable development methods. The techniques that we develop are intended to assist designers of algorithms and systems in carrying out formally verified developments, where proofs of relevant properties, as well as bugs, can be found with a high degree of automation.
Within this context, we work on techniques for automated theorem proving for expressive languages based on first-order logic, with support for theories (including fragments of arithmetic or of set theory) that are relevant for specifying algorithms and systems. Ideally, systems and their properties would be specified using high-level, expressive languages, errors in specifications would be discovered automatically, and finally, full verification could also be performed completely automatically. Due to the fundamental undecidability of the problem, this cannot be achieved in general. Nevertheless, we have observed important advances in automated deduction in recent years, to which we have contributed. These advances suggest that a substantially higher degree of automation can be achieved over what is available in today’s tools supporting deductive verification. Our techniques are developed within trail-based solving and saturation-based reasoning, the two main frameworks of contemporary automated reasoning, of which SMT (Satisfiability Modulo Theories) and superposition, respectively, are the most prominent current examples in first- and higher-order logic. These two frameworks have complementary strengths and weaknesses, and figuring out how and when to make them converge is part of our interests. Techniques developed within the symbolic computation domain, such as algorithms for quantifier elimination for appropriate theories, are also relevant and are part of our portfolio of techniques. In order to handle expressive input languages, we are working on techniques that encompass tractable fragments of higher-order logic, for example for specifying inductive or co-inductive data types, for automating proofs by induction, or for handling collections defined through a characteristic predicate.
Since full automatic verification remains elusive, another line of our research targets interactive proof platforms. We intend these platforms to benefit from our work on automated deduction by incorporating powerful automated backends and thus raise the degree of automation beyond what current proof assistants can offer. Since most conjectures stated by users are initially wrong (due to type errors, omitted hypotheses or overlooked border cases), it is also important that proof assistants be able to detect and explain such errors rather than letting users waste considerable time in futile proof attempts. Moreover, increased automation must not come at the expense of trustworthiness: skeptical proof assistants expect to be given an explanation of the proof found by the backend prover that they can certify.
Model checking is an established and highly successful technique for verifying systems and for finding errors. Our contributions in this area more specifically target quantitative aspects, in particular the verification of timed or probabilistic systems. A specificity of VeriDis is to consider partially specified systems, using parameters, in which case the verification problem becomes the synthesis of suitable parameter valuations.
Our methodological and foundational research is accompanied by the development of efficient software tools, several of which go beyond pure research prototypes: they have been used by others or have been integrated in verification platforms developed by other groups. We also validate our work on verification techniques by applying them to the formal development of algorithms and systems. We mainly target high-level descriptions of concurrent and distributed algorithms and systems. This class of algorithms is ubiquitous, ranging from multi- and many-core algorithms to large networks and cloud computing, and their formal verification is notoriously difficult. Targeting high levels of abstraction allows the designs of such systems to be verified before an actual implementation has been developed, contributing to reducing the costs of formal verification. The potential of distributed systems for increased resilience to component failures makes them attractive in many contexts, but also makes formal verification even more important and challenging. Our work in this area aims at identifying classes of algorithms and systems for which we can provide guidelines and identify patterns of formal development that make verification less an art and more an engineering discipline. We mainly target components of operating systems, distributed and cloud services, and networks of computers or mobile devices. When correctness properties have been formally verified for a high-level specification of an algorithm, the correctness of an implementation of that algorithm still remains to be checked, using techniques such as refinement proofs, code generation, testing, or trace validation.
Beyond formal system verification, we pursue applications of some of the symbolic techniques that we develop in other domains. We have observed encouraging success in using techniques of symbolic computation for the qualitative analysis of biological and chemical networks described by systems of ordinary differential equations that were previously only accessible to large-scale simulation. Such networks include biological reaction networks as they occur with models for diseases such as diabetes or cancer. They furthermore include epidemic models such as variants and generalizations of SEIR models, which are typically used for Influenza A or Covid-19. In this way, we aim for our work grounded in verification to have an impact on the sciences, beyond engineering, which will feed back into our core formal methods community.
New results
Automated and Interactive Theorem Proving
Techniques for Automated Deduction
Quantifier Handling in Higher-Order SMT.
SMT solvers have over the years been able to cope with increasingly expressive logics, from ground formulas to full first-order logic (FOL). In the past, we proposed a pragmatic extension for SMT solvers to support higher-order logic reasoning natively without compromising performance on FOL reasoning, thus leveraging the extensive research and implementation efforts dedicated to efficient SMT solving. However, the higher-order SMT solvers resulting from this work are not as effective as we would expect given their performance on first-order logic. We believe this comes from the fact that only the core of the SMT solver has been extended, ignoring in particular the modules for quantifier instantiation.
This motivated us to start working, in 2020, on an extension of the main quantifier-instantiation approach (congruence closure with free variables, CCFV) to higher-order logic. We are working on an encoding of the higher-order CCFV problem into a set of SAT constraints. In previous years, we concentrated our efforts on the theory, proving the soundness and completeness of our approach, and developed pseudo-code for all elements of the CCFV computation. In 2022 and 2023, these algorithms were implemented in a C++ library and tested on benchmarks from the SMT-LIB collection. We have started to integrate this library within a new SMT framework. In 2024, most of the work focused on building the rest of the infrastructure of the framework. The library will eventually be released under a permissive open-source license.
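To give a flavour of the instantiation problem that CCFV addresses, here is a deliberately naive sketch (hypothetical code, not the actual C++ library): given ground terms grouped into congruence classes, it enumerates substitutions for a free variable so that the instantiated literal holds in the current candidate model. The real approach encodes this search as SAT constraints rather than enumerating candidates explicitly.

```python
# Toy illustration of the CCFV instantiation problem (names and data ours).
# Congruence classes of a ground model: {a, f(b)} and {b, c}.
classes = {"a": 0, "f(b)": 0, "b": 1, "c": 1}

def find_instances(pattern, target, candidates):
    """Return all terms t such that pattern[x := t] lies in target's class."""
    hits = []
    for t in candidates:
        term = pattern.replace("x", t)
        if term in classes and classes[term] == classes[target]:
            hits.append(t)
    return hits

# Which instantiations of x make f(x) = a hold in the model? Only x := b,
# since f(b) and a share a congruence class.
print(find_instances("f(x)", "a", ["a", "b", "c"]))  # ['b']
```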
Effective Symbolic Model Construction.
When automatic reasoning techniques are applied in order to prove a specific property of some system, a proof is a certificate of success. If a proof attempt fails, automatic reasoning techniques may still terminate and implicitly provide a representation of a countermodel of the property of interest. We have worked on effective representations for such countermodels, providing insight into why the desired property does not hold. In this way, the system, its formalization, or the property itself can be debugged. A prerequisite for such an analysis is the successful termination of the proof attempt. We are working on new techniques for termination in the presence of loops while preserving our capabilities in analyzing the resulting model representations.
SCL Clause Learning from Simple Models.
We have continued our work on the SCL (Simple Clause Learning) family of automated reasoning calculi. Given a finite consistent set of first-order ground literals without equality, we have developed an algorithm that generates a complete first-order interpretation, i.e., an interpretation for all ground literals over the signature and not just those in the input set, that is also a model of the input set. The interpretation is represented by first-order linear literals and can be effectively used to evaluate clauses. A particular application is SCL stuck states. The SCL calculus always computes with respect to a finite number of ground literals; it then finds either a contradiction or a stuck state, i.e., a model with respect to the considered ground literals. Our algorithm builds a complete literal interpretation out of such a stuck-state model that can then be used to evaluate the clause set. If all clauses are satisfied, an overall model has been found. If the interpretation does not satisfy some clause, this information can be effectively exploited to extend the scope of ground literals considered by SCL 24.
In the case of first-order logic with equality, an efficient evaluation of clauses and literals with respect to a set of (ground) unit equations is required for SCL(EQ). To this end, we have lifted the classical congruence closure algorithm to the case of equations with variables 46.
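For reference, a minimal sketch of the classical ground congruence closure that this work lifts to equations with variables (illustrative union-find code, ours; terms are tuples such as ("f", "a") for f(a)):

```python
class CongruenceClosure:
    """Ground congruence closure via union-find (toy illustration)."""

    def __init__(self):
        self.parent = {}

    def find(self, t):
        self.parent.setdefault(t, t)
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]  # path halving
            t = self.parent[t]
        return t

    def union(self, s, t):
        self.parent[self.find(s)] = self.find(t)

    def merge(self, s, t, terms):
        self.union(s, t)
        # Congruence propagation: merge f(u...) and f(v...) whenever the
        # arguments are pairwise congruent.
        changed = True
        while changed:
            changed = False
            for u in terms:
                for v in terms:
                    if (isinstance(u, tuple) and isinstance(v, tuple)
                            and u[0] == v[0] and len(u) == len(v)
                            and all(self.find(x) == self.find(y)
                                    for x, y in zip(u[1:], v[1:]))
                            and self.find(u) != self.find(v)):
                        self.union(u, v)
                        changed = True

# From a = b it follows that f(a) = f(b):
cc = CongruenceClosure()
terms = ["a", "b", ("f", "a"), ("f", "b")]
cc.merge("a", "b", terms)
print(cc.find(("f", "a")) == cc.find(("f", "b")))  # True
```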
Destructive Equality Resolution.
Bachmair’s and Ganzinger’s abstract redundancy concept for the superposition calculus justifies almost all operations that are used in superposition provers to delete or simplify clauses, and thus to keep the clause set manageable. Typical examples are tautology deletion, subsumption deletion, and demodulation, and with a more refined definition of redundancy, joinability and connectedness can be covered as well. The notable exception is Destructive Equality Resolution, that is, the replacement of a clause x ≉ t ∨ C, where the variable x does not occur in the term t, by the clause C{x ↦ t}. This operation is not justified by the abstract redundancy concept, and we have studied under which conditions it can nevertheless be applied without endangering refutational completeness.
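In standard notation (x a variable that does not occur in t), the rule reads:

\[
  \frac{x \not\approx t \;\lor\; C}{C\{x \mapsto t\}}
  \qquad \text{if } x \notin \mathrm{vars}(t)
\]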
Guiding Word Equation Solving using Graph Neural Networks.
Reasoning within theories such as arithmetic, arrays, and algebraic data structures has become a key challenge in automated reasoning. To address this, Satisfiability Modulo Theories (SMT) solvers have been developed, offering efficient decision procedures for a wide range of theories, including the theory of strings. Strings, as a fundamental data type in programming, are crucial for many domains such as text processing, database management, and web applications. One of the simplest string constraints is the word equation, modeled as an equation in a free semigroup. Although the word equation problem is decidable and lies within PSPACE, SMT solvers often struggle with proving the unsatisfiability of word equations.
We developed a new algorithm that leverages a Graph Neural Network (GNN)-guided approach for solving word equations, building upon the well-known Nielsen transformation for equation splitting. The algorithm iteratively rewrites the terms on both sides of an equation, generating a tree-like search space. The choice of path at each split point significantly impacts solving efficiency, motivating the use of GNNs to make informed decisions about splits. Split decisions are formulated as multi-classification tasks, and five distinct graph representations of word equations are introduced to capture their structural properties for GNN processing.
The algorithm is implemented in a solver named DragonLi. Experimental results on both artificial and real-world benchmarks demonstrate that DragonLi excels in solving satisfiable problems. For single-word equations, DragonLi significantly outperforms well-established string solvers, while for conjunctions of multiple word equations, it remains competitive with state-of-the-art solvers.
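The following depth-bounded sketch (ours, not DragonLi code) shows the Nielsen-transformation search space that the solver explores; uppercase characters stand for variables, lowercase for letters. At each split point, the real solver asks the GNN which branch to explore first, whereas this toy simply tries both.

```python
def solvable(lhs, rhs, depth=20):
    """Naive satisfiability check for a single word equation lhs = rhs.
    Returns False both for unsatisfiable equations and when the depth
    bound is exhausted (the real solver is far more clever)."""
    if depth == 0:
        return False
    if not lhs or not rhs:
        # X1...Xn = empty is solvable by mapping every variable to empty.
        return all(c.isupper() for c in lhs + rhs)
    a, b = lhs[0], rhs[0]
    if a == b:                       # strip a common first symbol
        return solvable(lhs[1:], rhs[1:], depth - 1)
    if a.isupper():                  # split on the variable a = lhs[0]
        return (solvable(lhs[1:], rhs, depth - 1) or           # a -> empty
                solvable(lhs.replace(a, b + a),                # a -> b.a
                         rhs.replace(a, b + a), depth - 1))
    if b.isupper():                  # symmetric case
        return solvable(rhs, lhs, depth)
    return False                     # two distinct letters: unsatisfiable

print(solvable("Xa", "aX"))  # True, e.g. X -> empty
print(solvable("Xa", "bX"))  # False: no assignment to X works
```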
Decision procedures for fragments with arithmetic.
Arithmetic reasoning is an important aspect of automated deduction applied to verification. Fragments mixing arithmetic and uninterpreted symbols are often undecidable. We study the decidability frontier of such fragments. In particular, we are examining monadic first-order logic with order interpreted over the real numbers. This fragment is decidable, whereas adding the successor (or, more precisely, the “+1” function on the reals) makes it undecidable. We investigate which restrictions and extensions of this fragment preserve decidability.
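For illustration (formulas ours): the first formula below lies in the decidable monadic fragment with order, while the second uses the “+1” function, the ingredient that leads to undecidability:

\[
  \forall x \,\bigl( P(x) \rightarrow \exists y \,( x < y \wedge Q(y) ) \bigr)
  \qquad\text{vs.}\qquad
  \forall x \,\bigl( P(x) \rightarrow Q(x + 1) \bigr)
\]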
Integration of automated and interactive theorem proving
Integration of Abduction in Isabelle/HOL.
Thanks to the creation of the associate team Carma (cf. section 9.1), Sophie Tourret and Haniel Barbosa have started to work on a new collaborative project. Its objective is the integration of cvc5’s abduction mechanism into the proof assistant Isabelle/HOL. Abduction is a logical operation that provides tentative explanations for statements in a given theory. For a proof assistant, it could be turned into a tool that suggests missing hypotheses or intermediate steps in proofs. cvc5 is currently the best tool for abduction modulo theories, and Isabelle/HOL is the proof assistant with the best automation. By combining both into a prototype tool, we want to robustly identify the bottlenecks of abduction and address them until abduction becomes useful in practice for proof engineers.
Extensions of a Formal Framework for Automated Reasoning.
We are part of a group developing a framework for formal refutational completeness proofs of abstract provers that implement automated reasoning calculi, especially calculi based on saturation such as ordered resolution and superposition. In previous work, we formalized in Isabelle/HOL a framework that fully captures the dynamic aspects of proof search with a saturation calculus.
This year, we completed the formalization in Isabelle/HOL of the superposition calculus, a state-of-the-art technique for theorem proving in first-order logic and higher-order logic with equality. The calculus was formalized to be compatible with the framework. The results were presented at the conference ITP 2024 28.
Proof Translation from HOL Light to Isabelle/HOL.
We revived an existing technique for the automatic translation of proofs in the proof assistant HOL Light to proofs in Isabelle/HOL, with the help of Stéphane Glondu, an Inria engineer. Our target was a proof of the compactness of first-order logic. Whereas the update of the existing method went well (and was presented at the Isabelle Workshop 2024), we deemed the way the results are used in Isabelle/HOL unsatisfactory. We therefore worked in parallel on a manual translation of the existing HOL Light proof, with the help of Lawrence Paulson. This translation is complete and will soon be submitted to the Archive of Formal Proofs, the main repository for Isabelle/HOL proofs.
Reconstruction of SMT Proofs.
SMT solvers are widely considered the tools of choice for program and system verification. Their combination with skeptical proof assistants requires proofs found by SMT solvers to be certified by the trusted kernel of the proof assistants. The Alethe format represents a trace of an SMT proof, explaining why a set of SMT constraints is unsatisfiable. In this work, we describe how to interpret Alethe proof traces and generate corresponding proofs that are accepted by the Lambdapi proof checker, a foundational proof assistant based on dependent type theory and rewriting rules that is intended to serve as a pivot for exchanging proofs between interactive proof assistants. Our implementation currently handles the pure logic (UL) fragment of SMT-LIB. It is based on the Carcara proof elaborator and proof checker in order to eliminate the need for proof search when handling some of the coarse-grained rules that exist in Alethe. Particular care has been taken to support efficient reconstruction of the hyper-resolution rule, which accounts for the bulk of ground reasoning in Alethe. A preliminary description of our approach was published at the SMT workshop 2024 27, and an extended version of that paper has been submitted to a journal.
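To give an idea of the ground reasoning involved, here is a propositional toy version of a hyper-resolution step (illustration only, ours; the actual reconstruction produces Lambdapi terms). A nucleus clause is resolved against one positive unit satellite per negative literal in a single step:

```python
# Literals are signed integers: 3 stands for p3, -3 for its negation.

def hyper_resolve(nucleus, satellites):
    """Resolve every negative literal of `nucleus` against a matching
    positive unit clause among `satellites`; return the resolvent."""
    units = {s[0] for s in satellites if len(s) == 1}
    resolvent = []
    for lit in nucleus:
        if lit < 0 and -lit in units:
            continue  # resolved away by the corresponding unit satellite
        resolvent.append(lit)
    return resolvent

# (-p1 v -p2 v p3) resolved with units (p1) and (p2) yields (p3).
print(hyper_resolve([-1, -2, 3], [[1], [2]]))  # [3]
```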
Towards a Verified Encoding of Proof Obligations.
The B and Event-B methods for formal system development are widely used for developing certified embedded systems, in particular in the railway domain. Their application generates a high number of proof obligations that must be discharged in order for a system development to be acceptable for certification authorities. Since a high degree of proof automation is essential in an industrial context, proofs are routinely delegated to automatic theorem provers, including SMT solvers. In a recent experiment conducted by the Clearsy company, among the roughly 77,000 proof obligations of a representative development project, 64% were proved automatically by the Atelier B tool suite, leaving 28,000 obligations to be proved by human interaction. The existing encoding of B proof obligations for SMT solvers 57 systematically reduces formulas to first-order logic, eliminating all constructs of set theory. A drawback of this approach is that it destroys the structure of formulas, in particular for constructs such as set comprehension that involve binders. In this work, we develop an alternative encoding that takes advantage of recent capabilities of SMT solvers to handle fragments of higher-order logic. Preliminary experiments suggest that this approach may result in better automation, which can be improved further by designing custom instantiation heuristics. We also aim for a high degree of confidence in the encoding and rely on a definition of the mathematical theory of the B language in the Lean proof assistant, together with a verified translation to SMT-LIB developed in Lean.
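As a hint of the shape of such an encoding, here is a sketch using Z3’s Python API and its array lambdas (our illustration; the actual translation targets the SMT-LIB higher-order fragment and is verified in Lean): a set comprehension such as { x | x > 0 } is kept as a binder of sort Int → Bool instead of being compiled away into first-order logic.

```python
from z3 import Int, Lambda, Select, Solver

# Model the B set comprehension { x | x > 0 } as a characteristic
# predicate, here represented by a Z3 array lambda of sort Int -> Bool.
x = Int("x")
positives = Lambda([x], x > 0)

s = Solver()
y = Int("y")
s.add(Select(positives, y))   # y is a member of the set
s.add(y < 10)
print(s.check())              # sat
print(s.model()[y])           # some positive value below 10
```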
Deskolemization.
We present a general strategy that enables the translation of tableau proofs using different Skolemization rules into machine-checkable proofs. It is part of a framework that enables (i) instantiation of the strategy into algorithms for different sets of tableau rules (e.g., different logics) and (ii) an easy soundness proof that relies on the local extensibility of user-defined rules. Furthermore, we propose an instantiation of this strategy for first-order tableaux that notably handles pre-inner Skolemization rules, which is, as far as we know, the first one in the literature. This deskolemization strategy has been implemented in the Goéland automated theorem prover, enabling an export of its proofs to Coq and Lambdapi. Finally, we have evaluated the algorithm’s performance for inner and pre-inner Skolemization rules through the certification of proofs from some categories of the TPTP library.
An Extension of the TPTP Derivation Format for Sequent-Based Calculus.
Motivated by the transfer of proofs between proof systems, and in particular from first-order automated theorem provers (ATPs) to interactive theorem provers (ITPs), we specify an extension of the TPTP derivation text format to describe proofs in first-order logic: SC-TPTP. To avoid a multiplication of standards, our proposed format over-specifies the TPTP derivation format by focusing on sequent formalisms. By doing so, it provides a high level of detail, is faithful to mathematical tradition, and covers multiple existing tools, in particular tableaux-based strategies. We make use of this format to allow the Lisa proof assistant to query the Goéland automated theorem prover, and we implement a library of tools able to parse, print and check SC-TPTP proofs, export them into Coq files, and rebuild low-level proof steps from more complex ones.
Formal Methods for Developing and Analyzing Algorithms and Systems
Contributions to Formal Methods of System Design
Effective Execution and Verification.
For certain first-order logic fragments combined with theories, effective symbolic execution and automated verification can be obtained. We propose a translation from eBPF (extended Berkeley Packet Filter) code to CHC (constrained Horn clause) sets over the combined theory of bitvectors and arrays. eBPF is used in particular in the Linux kernel, where user code is executed under kernel privileges. In order to protect the kernel, a well-known verifier statically checks the code for any harm, and a number of research efforts have aimed at securing and improving the performance of this verifier. Our translation procedure bpfverify is precise and covers almost all details of the eBPF language. Functional properties are automatically verified using Z3. We prove termination of the procedure and show on real-world eBPF code examples that full-fledged automatic verification is actually feasible 25.
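A minimal CHC system in the spirit of such a translation (a toy counter of ours, not actual bpfverify output), expressed with Z3’s fixedpoint engine over bitvectors:

```python
from z3 import And, BitVec, BitVecSort, BoolSort, Fixedpoint, Function

# Toy CHC system: an 8-bit register r starts at 0 and is incremented;
# we ask whether r can ever reach 255. The real translation produces
# clauses of this shape over bitvectors and arrays.
fp = Fixedpoint()
Inv = Function("Inv", BitVecSort(8), BoolSort())
r = BitVec("r", 8)

fp.register_relation(Inv)
fp.declare_var(r)
fp.rule(Inv(0))                          # initial state
fp.rule(Inv(r + 1), Inv(r))              # transition: r := r + 1
print(fp.query(And(Inv(r), r == 255)))   # sat: 255 is reachable
```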
Synthesis of inductive invariants for distributed algorithms.
We investigate the use of symbolic model checking techniques for automatically computing inductive invariants for parameterized distributed algorithms. Specifically, the IC3PO model checker applies the well-known IC3 model checking algorithm to finite instances of the algorithm and, in case of success, retrieves inductive invariants for those instances. It then inspects these invariants for symmetries with respect to space (processes) and time (e.g., ballot numbers used during the algorithm) and expresses those symmetries by introducing quantifiers. The resulting formulas are then checked for larger input sizes until no new invariants are inferred. In 2024, we worked on extending the technique for specifications that require more expressive logical languages than previously considered. Specifically, we worked on inferring inductive invariants for the well-known Raft consensus algorithm that requires reasoning over sequences. We have so far been able to infer inductive invariants establishing two of the three main correctness properties of Raft.
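To illustrate the kind of generalization involved (a schematic example of ours, not an actual inferred Raft invariant): a property observed on a three-process instance, say that no term has two leaders, is generalized into a quantified formula such as

\[
  \forall p, q \in \mathit{Server},\; \forall t \in \mathit{Term} :\;
  \mathit{isLeader}(p, t) \wedge \mathit{isLeader}(q, t) \Rightarrow p = q
\]

which is then checked on larger instances until no new invariants are inferred.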
Verifying and testing graph-based models.
The Event-B formalism is mainly used for modelling reactive systems and requires techniques and tools for validation. In a joint work, we have experimented with a new tool called Cyclone that can be used for verifying and testing graph-based models. In contrast to other verification tools, Cyclone provides users with a unique way of building models and specifying their properties, by explicitly constructing and reasoning about graphs. As graphs can naturally capture both static and dynamic behaviours of a system, this allows users to represent many challenging problems. In 38, we describe Cyclone’s basic features through different examples. Our preliminary evaluation results show that Cyclone has promising potential for handling verification tasks from various domains and good usability for those who are not familiar with verification tools.
Checking contracts in Event-B.
The paper 36 reports on lessons learnt from introducing automated tools such as Event-B and Rodin for verifying software-based systems in higher education at the master’s level. The verification of program properties such as partial correctness or the absence of runtime errors is based on induction principles, using algorithmic techniques for checking statements in a logical framework such as classical logic or temporal logic. Alan Turing was undoubtedly the first to annotate programs and to apply an induction principle to transition systems. Our work is placed in this perspective of verifying safety properties of programs that may be executed sequentially or in a distributed manner, with the aim of presenting these techniques as simply as possible to student classes in the context of a posteriori verification. We report on an experiment that uses the Event-B language and associated tools as an assembly and disassembly platform for establishing the correctness of programs written in a programming language. We have adopted a contract-based approach to programming, which we implement with Event-B. A few examples illustrate this pedagogical approach, together with comments and observations. This step is part of a process of learning both the underlying techniques and other tools based on the same ideas, such as Frama-C.
Verified Code Generation from PlusCal Programs.
Specifications written in high-level languages such as TLA+ are useful for verifying correctness properties. They often result in state spaces that remain manageable for model checking, and they allow users to design inductive invariants that underlie deductive system verification. However, there is a substantial gap between high-level specifications of algorithms and implementations of those algorithms in actual programs. One way to avoid this gap is to translate specifications into code in a programming language that can be compiled and executed. In this work, we investigate the feasibility of such an approach for algorithms written in PlusCal, an algorithmic language that can be translated to TLA+ for verification. We define a series of PlusCal fragments together with semantics-preserving translations from one fragment to the next, and we aim at implementing a code generator for the most restrictive fragment. In order to ensure the correctness of the approach, the semantics of the intermediate and target languages are defined in the Lean proof assistant, and the preservation of the semantics by the translations, also defined in Lean, is formally proved.
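A toy illustration of the final translation step (hypothetical mini-AST and names of ours; the real fragments, semantics, and proofs live in Lean): a code generator for assignments and while loops of a minimal PlusCal-like fragment.

```python
from dataclasses import dataclass

@dataclass
class Assign:
    var: str
    expr: str

@dataclass
class While:
    cond: str
    body: list

def emit(stmt, indent=0):
    """Print one statement of the restricted fragment as Python-like code."""
    pad = "    " * indent
    if isinstance(stmt, Assign):
        return f"{pad}{stmt.var} = {stmt.expr}"
    if isinstance(stmt, While):
        lines = [f"{pad}while {stmt.cond}:"]
        lines += [emit(s, indent + 1) for s in stmt.body]
        return "\n".join(lines)
    raise ValueError(f"unsupported statement: {stmt!r}")

prog = [Assign("x", "0"), While("x < 3", [Assign("x", "x + 1")])]
print("\n".join(emit(s) for s in prog))
```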
Validating traces of distributed systems.
Programming distributed systems is error-prone, and while formal methods such as TLA+ can help design such systems, the translation from the verified specification towards the final implementation is rarely certified. In order to bridge the gap between formal specifications of algorithms that are at the core of distributed systems and actual implementations of these systems, we developed a framework for the instrumentation of Java programs allowing one to record events and variable updates that are meaningful at the level of a given TLA+ specification. We then use the TLA+ model checker to verify that the recorded trace of an execution indeed corresponds to a possible execution of the high-level specification. Although this approach cannot formally establish the correctness of an implementation, experience with applying it to several distributed algorithms, including a production system that had been extensively validated using traditional testing techniques, indicates that it is surprisingly effective at discovering discrepancies between the TLA+ specification and the implementation. We describe the technique and discuss reasons for these discrepancies in a paper published at SEFM 2024 26. A journal version of that paper is in preparation.
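The flavour of the instrumentation, transposed to Python for illustration (the actual framework instruments Java programs, and the recorded traces are checked by the TLC model checker against the TLA+ specification):

```python
import json
import time

# Log spec-level variable updates as JSON lines; the resulting trace is
# later validated against the high-level TLA+ specification.
TRACE = open("trace.ndjson", "a")

def log_update(var, value):
    TRACE.write(json.dumps({"clock": time.time_ns(),
                            "var": var, "val": value}) + "\n")
    TRACE.flush()

def commit(log_index):
    # Instrument the point where a variable of the spec changes; the name
    # "commitIndex" is a hypothetical TLA+ variable used for illustration.
    log_update("commitIndex", log_index)
    # ... actual commit logic of the implementation ...

commit(42)
```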
Reversibility for Fault Tolerance in Typed Sessions.
Multiparty Session Types (MPST) are a family of type-based techniques developed to prevent various classes of bugs in message-passing concurrent programs, such as deadlocks. MPST are also used as a form of protocol specification, where the well-typedness of a program ensures its conformance to the specification (session fidelity). However, MPST often assume a reliable model of communication, i.e., without message loss or process faults. Existing approaches to relax this assumption either augment MPST with additional elements to deal with faults (e.g., placing explicit checkpoints in the protocol specification) or lower the guarantees provided (e.g., Affine MPST, which safely stop the whole system in case of a process failure, losing session fidelity).
In this project, we aim to implement MPST on top of a reversible variant of the higher-order π-calculus, using reversibility to roll the system back to a consistent state when a fault occurs, thus providing fault tolerance without sacrificing session fidelity.
Model checking for stochastic, timed or linear systems
Opacity of timed automata.
Opacity of a system is a property describing the impossibility for an external attacker to deduce some private information by observing an execution of the system. This notion is known 56 to be undecidable in the general case; as such, most works consider restricted frameworks. In 40, we considered a variety of opacity problems by weakening either the model or the attacker. Specifically, we considered limitations on the number of clocks, the diversity of observations provided by the system, and the number of observations the attacker can access, among other restrictions.
The idea of limiting the attacker’s power can be seen as an extension of an existing line of research in which the attacker only has access to the total duration of executions. In 20 and 21, we also studied this variant of opacity. In 20, we considered timed automata with a single clock, while allowing parameters. We developed a method to compute the sets of private and public durations observable by the attacker, as well as to compare those two sets when feasible. In 21, we considered systems with arbitrarily many clocks, excluded parameters, and introduced a form of control of the system. The objective was to detect when a system can be easily modified to ensure opacity.
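In this duration-only setting, opacity can be phrased as an inclusion of duration sets (schematic formulation, notation ours):

\[
  \mathit{DPriv}(\mathcal{A}) \subseteq \mathit{DPub}(\mathcal{A})
\]

where DPriv(𝒜) and DPub(𝒜) denote the sets of total durations of runs that visit, respectively avoid, the private locations: the system is opaque if every duration of a private run can also be produced by a public run, so that observing the execution time reveals nothing.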
Diagnosis of stochastic systems.
The notion of diagnosis can be seen as dual to opacity. While opacity seeks to hide private information from an observer, diagnosis aims to provide information about specific unobservable behaviors of a system, such as failures. The study of diagnosis in stochastic systems, such as those based on Markov chains, is now well developed. For instance, in 53, the authors demonstrate, for various forms of diagnosis, how to determine whether a system is diagnosable and, if so, how to construct a diagnoser, i.e., a tool that translates a sequence of observations into a verdict. In 32, we examined how to minimize the cost of diagnosis. This includes reducing the delay between a fault’s occurrence and its detection by the diagnoser, as well as the number of sensors and the frequency of their use.
Modeling and verifying the Bitbus protocol.
Bitbus is a standard protocol used in master-slave networks of industrial controllers. In the context of a project aimed at re-engineering protocols based on Bitbus that are used in nuclear power plants, we model the protocol as communicating timed automata, with the objective of studying some of the properties it guarantees, in particular its tolerance of intermittent faults.
Termination of linear loops.
Loops are a fundamental component of any programming language, and their study plays a crucial role in several subfields of computer science, including automated verification, abstract interpretation, program analysis, semantics, and more. Significant work in the linear dynamical systems community has focused on representing program loops as linear systems and analyzing the properties they satisfy, particularly with regard to loop termination. Since determining the termination of linear dynamical systems is undecidable in general, one common workaround involves searching for invariants that guarantee non-termination. The existence of such invariants provides a definitive answer to the termination problem, whereas their absence is inconclusive. This strategy hence circumvents the undecidability barrier. In 17, we investigated the automatic synthesis of invariants expressible in Presburger arithmetic, which are suitable for systems relying solely on integer variables.
In 34, we introduced the notion of constraint loops, where the values of the variables in the next iteration of the loop are chosen non-deterministically, subject to satisfying a set of linear inequalities. Such loops often arise from abstractions made when modeling a complex loop in a program. Termination in this context means that every possible non-deterministic choice eventually leads to loop termination. While our results are currently limited in the number of dimensions (i.e., the number of variables within the loop), they are already technically complex. This line of research is ongoing.
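Schematically (notation ours), a run of a constraint loop is a sequence of variable valuations \(\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \ldots\) in which consecutive pairs satisfy the linear constraints

\[
  A \begin{pmatrix} \mathbf{x}_i \\ \mathbf{x}_{i+1} \end{pmatrix} \le \mathbf{c}
  \qquad \text{for all } i \ge 0,
\]

and the loop terminates if and only if every such sequence is finite.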
Verification and Analysis of Dynamic Properties of Biological Systems
Approximate Conservation Laws.
Section 3.3 presented an example of analyzing the kinetics of a chemical reaction network across multiple timescales 58. While the reduction method described is broadly applicable, it can fail in certain cases. A primary source of failure is the degeneracy of the quasi-steady state, which occurs when the fast dynamics exhibit a continuum of steady states. This situation typically arises when the truncated fast ODEs possess first integrals, i.e., quantities conserved along any trajectory but fixed to arbitrary values depending on the initial conditions. As a result, the quasi-steady states lose hyperbolicity, as the Jacobian matrix for the fast dynamics becomes singular. To address this limitation, we have introduced the concept of approximate conservation laws, enabling further reductions.
From a technical perspective, this framework necessitates parametric adaptations of several established algorithms in Symbolic Computation. One straightforward example is the computation of the rank of a matrix with real parameters, producing a formal finite case distinction where possible ranks are paired with necessary and sufficient conditions expressed as Tarski formulas. This approach helps identify critical cases related to the aforementioned singularity in the Jacobian. Another example involves the use of comprehensive Gröbner bases in parametric computations for certain syzygy modules. A key challenge in all such algorithms is the combinatorial explosion in the number of cases.
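A minimal illustration of the parametric rank computation (a SymPy sketch of ours, not the project’s implementation, which produces case distinctions as Tarski formulas and uses comprehensive Gröbner bases):

```python
import sympy as sp

# The rank of this matrix depends on the real parameter k; the critical
# values are exactly where the determinant vanishes.
k = sp.symbols("k", real=True)
M = sp.Matrix([[1, k],
               [k, 1]])

det = M.det()                        # 1 - k**2
critical = sp.solve(sp.Eq(det, 0), k)
print(critical)                      # [-1, 1]

# Case distinction: rank 2 iff k != 1 and k != -1, otherwise rank 1.
print(M.subs(k, 2).rank())           # 2
print(M.subs(k, 1).rank())           # 1
```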
To mitigate this issue, we employ SMT solving and real quantifier elimination techniques to detect inconsistencies and prune the case distinction tree early. However, since these decision procedures are typically double-exponential in complexity, they can become bottlenecks, particularly when the degrees of polynomial terms increase, potentially preventing termination within a reasonable timeframe. As the results remain valid even without eliminating some redundant cases, we combine multiple methods and impose suitable timeouts. This work has culminated in two articles published in the SIAM Journal on Applied Dynamical Systems 14, 15.