LPAR 2023: Papers with Abstracts

Papers
Abstract. This talk discusses current extensions of the algebra theory from the NASA/PVS library of formal developments for the Prototype Verification System (PVS). It discusses the approach to formalizing theorems of ring theory and how they are applied to infer properties of specific algebraic structures. As case studies, we will present recent formalizations of the theories of Euclidean Domains and Quaternions. Moreover, we will show how a general verification of Euclid’s division algorithm can be specialized to verify this algorithm for specific Euclidean Domains, and how the abstract theory of Quaternions can be parameterized to deal with the structure of Hamilton’s Quaternions.
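As a hedged illustration of the kind of specialization the abstract describes (the function names below are ours, not those of the PVS development), a generic Euclidean algorithm can be parameterized by a division operation and then instantiated for a concrete Euclidean domain such as the integers:

```python
# Illustrative sketch only: a generic Euclidean algorithm parameterized
# by a division function, specialized to the Euclidean domain of integers.

def euclid_gcd(a, b, divmod_fn, is_zero):
    """Generic Euclidean algorithm: repeatedly replace (a, b)
    with (b, a mod b) until the remainder is zero."""
    while not is_zero(b):
        _, r = divmod_fn(a, b)   # division with remainder in the domain
        a, b = b, r
    return a

# Specialization to the integers, where divmod and zero are the usual ones.
def int_gcd(a, b):
    return euclid_gcd(a, b, divmod, lambda x: x == 0)

print(int_gcd(48, 18))  # 6
```

A different Euclidean domain (e.g. univariate polynomials over a field) would supply its own `divmod_fn` and `is_zero`, mirroring how the abstract's general verification is specialized.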
Abstract. A well-known challenge in leveraging automatic theorem provers, such as satisfiability modulo theories (SMT) solvers, to discharge proof obligations from interactive theorem provers (ITPs) is determining which axioms to send to the solver together with the conjecture to be proven. Too many axioms may confuse or clog the solver, while too few may make a theorem unprovable. When a solver fails to prove a conjecture, it is unclear to the user which case transpired. In this paper we enhance SMTCoq — an integration between the Coq ITP and the cvc5 SMT solver — with a tactic called abduce aimed at mitigating the uncertainty above. When the solver fails to prove the goal, the user may invoke abduce which will use abductive reasoning to provide facts that will allow the solver to prove the goal, if any.
Abstract. Hyperproperties are commonly used to define information-flow policies and other requirements that reason about the relationship between multiple traces in a system. We consider HyperQPTL – a temporal logic for hyperproperties that combines explicit quantification over traces with propositional quantification as, e.g., found in quantified propositional temporal logic (QPTL). HyperQPTL therefore truly captures ω-regular relations on multiple traces within a system. As such, HyperQPTL can, e.g., express promptness properties, which state that there exists a common bound on the number of steps up to which an event must have happened. While HyperQPTL has been studied and used in various prior works, thus far, no model-checking tool for it exists. This paper presents AutoHyperQ, a fully-automatic automata-based model checker for HyperQPTL that can cope with arbitrary combinations of trace and propositional quantification. We evaluate AutoHyperQ on a range of benchmarks and, e.g., use it to analyze promptness requirements in a diverse collection of reactive systems. Moreover, we demonstrate that the core of AutoHyperQ can be reused as an effective tool to translate QPTL formulas into ω-automata.
Abstract. Automated reasoning with theories and quantifiers is a common demand in formal methods. A major challenge that arises in this respect comes with rewriting/simplifying terms that are equal with respect to a background first-order theory T, as equality reasoning in this context requires unification modulo T. We introduce a refined algorithm for unification with abstraction in T, allowing for a fine-grained control of equality constraints and substitutions introduced by standard unification with abstraction approaches. We experimentally show the benefit of our approach within first-order linear rational arithmetic.
Abstract. The family of SCL (Clause Learning from Simple Models) calculi learns clauses with respect to a partial model assumption, similar to CDCL (Conflict Driven Clause Learning). The partial model always consists of ground first-order literals and is built by decisions and propagations. In contrast to propositional logic where propagation chains are always finite, in first-order logic they can become infinite. Therefore, the SCL family does not require exhaustive propagation and the size of the partial model is always finitely bounded. Any partial model not leading to a conflict constitutes a model for the respective finitely bounded ground clause set. We show that all potential partial models can be explored as part of the SCL calculus for first-order logic without equality and that any overall model is an extension of a partial model considered. Furthermore, SCL turns into a semi-decision procedure for first-order logic by extending the finite bound for any partial model not leading to a conflict.
Abstract. Contracts specifying a procedure’s behavior in terms of pre- and postconditions are essential for scalable software verification, but cannot express any constraints on the events occurring during execution of the procedure. This necessitates annotating code with intermediate assertions, preventing full specification abstraction.
We propose a logic over symbolic traces able to specify recursive procedures in a modular manner that refers to specified programs only in terms of events. We also provide a deduction system based on symbolic execution and induction that we prove to be sound relative to a trace semantics.
Our work generalizes contract-based deductive verification to trace-based deductive verification by extending the notion of state-based contracts to trace-based contracts.
Abstract. Clause selection plays a crucial role in modern saturation-based automatic theorem provers. A commonly used heuristic suggests prioritizing small clauses, i.e., clauses with few symbol occurrences. More generally, we can give preference to clauses with a low weighted symbol occurrence count, where each symbol’s occurrence count is multiplied by a respective symbol weight. Traditionally, a human domain expert would supply the symbol weights.
In this paper, we propose a system based on a graph neural network that learns to predict symbol weights with the aim of improving clause selection for arbitrary first-order logic problems. Our experiments demonstrate that by advising the automatic theorem prover Vampire on the first-order fragment of TPTP using a trained neural network, the prover’s problem solving capability improves by 6.6% compared to uniformly weighting symbols and by 2.1% compared to a goal-directed variant of the uniformly weighting strategy.
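The weighted symbol-occurrence count described above can be sketched as follows; this is only an illustrative reading of the heuristic, not Vampire's implementation, and the weights here are hypothetical constants rather than the outputs of the paper's trained neural network:

```python
# Illustrative sketch: clause weight as a weighted symbol-occurrence count.
# In the paper's setting the symbol weights would be predicted by a GNN;
# here they are made-up constants for demonstration.
from collections import Counter

def clause_weight(clause_symbols, weights, default=1.0):
    """Sum over symbols of (occurrence count) * (symbol weight).
    Symbols without a learned weight fall back to a default."""
    counts = Counter(clause_symbols)
    return sum(n * weights.get(sym, default) for sym, n in counts.items())

weights = {"f": 0.5, "g": 2.0}                     # hypothetical weights
light = clause_weight(["f", "f", "a"], weights)    # 2*0.5 + 1*1.0 = 2.0
heavy = clause_weight(["g", "g", "a"], weights)    # 2*2.0 + 1*1.0 = 5.0
# A small-clause-first heuristic would select `light` before `heavy`,
# even though both clauses have three symbol occurrences.
```

With uniform weights (all 1.0) this degenerates to the classical "prefer small clauses" heuristic the abstract uses as its baseline.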
Abstract. In this work we extend an instantiation-based theorem prover iProver with machine learning (ML) guidance based on graph neural networks. For this we implement an interactive mode in iProver, which allows communication with an external agent via network sockets. The external (ML-based) agent guides the proof search by scoring generated clauses in the given clause loop. Our evaluation on a large set of Mizar problems shows that the ML guidance outperforms iProver’s standard human-programmed priority queues, solving more than twice as many problems in the same time. To our knowledge, this is the first time the performance of a state-of-the-art instantiation-based system is doubled by ML guidance.
Abstract. Neural networks have become critical components of reactive systems in various domains within computer science. Despite their excellent performance, using neural networks entails numerous risks that stem from our lack of ability to understand and reason about their behavior. Due to these risks, various formal methods have been proposed for verifying neural networks; but unfortunately, these typically struggle with scalability barriers. Recent attempts have demonstrated that abstraction-refinement approaches could play a significant role in mitigating these limitations; but these approaches can often produce networks that are so abstract that they become unsuitable for verification. To deal with this issue, we present CEGARETTE, a novel verification mechanism where both the system and the property are abstracted and refined simultaneously. We observe that this approach allows us to produce abstract networks which are both small and sufficiently accurate, allowing for quick verification times while avoiding a large number of refinement steps. For evaluation purposes, we implemented CEGARETTE as an extension to the recently proposed CEGAR-NN framework. Our results are highly promising, and demonstrate a significant improvement in performance over multiple benchmarks.
Abstract. Choreographic programming is a paradigm where programmers write global descriptions of distributed protocols, called choreographies, and correct implementations are automatically generated by a mechanism called projection. Not all choreographies are projectable, because decisions made by one process must be communicated to other processes whose behaviour depends on them – a property known as knowledge of choice.
The standard formulation of knowledge of choice disallows protocols such as third-party authentication with retries, where two processes iteratively interact, and other processes wait to be notified at the end of this loop. In this work we show how knowledge of choice can be weakened, extending the class of projectable choreographies with these and other interesting behaviours. The whole development is formalised in Coq. Working with a proof assistant was crucial to our development, because of the help it provided with detecting counterintuitive edge cases that would otherwise have gone unnoticed.
Abstract. Conditioning plays an important role in revising uncertain information in light of new evidence. This work focuses on the study of Fagin and Halpern (FH-)conditioning in the context where uncertain information is represented by weighted or possibilistic belief bases. Weighted belief bases are extensions of classical logic belief bases where a weight or degree of belief is associated with each propositional logic formula. This paper proposes a characterization of a syntactic computation of the revision of weighted belief bases (in the light of new information) which is in full agreement with the semantics of the FH-conditioning of possibilistic distributions. We show that the size of the revised belief base is linear with respect to the size of the initial base and that the computational complexity amounts to performing O(log2(n)) calls to the propositional logic satisfiability tests, where n is the number of different degrees of certainty used in the initial belief base.
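A plausible reading of the O(log2(n)) complexity claim is a binary search over the n sorted certainty degrees, posing one satisfiability query per probe. The sketch below is our own illustration of that argument, not the paper's algorithm; the SAT oracle is stubbed out and assumed to answer a question that is monotone in the degree:

```python
# Illustrative sketch: locating a cut-off certainty degree among n sorted
# degrees with O(log n) calls to a (stubbed, assumed-monotone) SAT oracle.

def find_cutoff(degrees, consistent_at):
    """degrees: certainty degrees in increasing order.
    consistent_at(d): monotone yes/no satisfiability test.
    Returns the largest degree at which the test succeeds (or None)."""
    lo, hi, best = 0, len(degrees) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if consistent_at(degrees[mid]):   # one SAT call per probe
            best, lo = degrees[mid], mid + 1
        else:
            hi = mid - 1
    return best

# Stub oracle: pretend degrees up to 3 remain consistent.
print(find_cutoff([1, 2, 3, 4, 5], lambda d: d <= 3))  # 3
```

Five degrees are handled with at most three oracle calls here, matching the logarithmic bound stated in the abstract.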
Abstract. Lewis’ theory of counterfactuals is the foundation of many contemporary notions of causality. In this paper, we extend this theory in the temporal direction to enable symbolic counterfactual reasoning on infinite sequences, such as counterexamples found by a model checker and trajectories produced by a reinforcement learning agent. In particular, our extension considers a more relaxed notion of similarity between worlds and proposes two additional counterfactual operators that close a semantic gap between the previous two in this more general setting. Further, we consider versions of counterfactuals that minimize the distance to the witnessing counterfactual worlds, a common requirement in causal analysis. To automate counterfactual reasoning in the temporal domain, we introduce a logic that combines temporal and counterfactual operators, and outline decision procedures for the satisfiability and trace-checking problems of this logic.
Abstract. With respect to the number of variables, the border of decidability lies between 2 and 3: the two-variable fragment of first-order logic, FO2, has an exponential model property and hence a NExpTime-complete satisfiability problem, while for the three-variable fragment, FO3, satisfiability is undecidable. In this paper we propose a rich subfragment of FO3, containing full FO2 (without equality), and show that it retains the finite model property and NExpTime complexity. Our fragment is obtained as an extension of the uniform one-dimensional variant of FO3.
Abstract. We present a benchmark of 29687 problems derived from the On-Line Encyclopedia of Integer Sequences (OEIS). Each problem expresses the equivalence of two syntactically different programs generating the same OEIS sequence. Such programs were conjectured by a learning-guided synthesis system using a language with looping operators. The operators implement recursion, and thus many of the proofs require induction on natural numbers. The benchmark contains problems of varying difficulty from a wide area of mathematical domains. We believe that these characteristics will make it an effective judge for the progress of inductive theorem provers in this domain for years to come.
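To illustrate the flavor of such problems (this is our own toy example, not one of the benchmark's actual program pairs), consider two syntactically different programs generating the same sequence, one with a looping operator and one in closed form; proving them equal for all n requires induction on natural numbers:

```python
# Illustrative only: two syntactically different programs for the
# triangular numbers. Their pointwise equality can be checked on finitely
# many inputs, but equivalence for all n needs an inductive proof.

def tri_loop(n):
    """Looping-style program: accumulate 1 + 2 + ... + n."""
    s = 0
    for i in range(1, n + 1):
        s += i
    return s

def tri_closed(n):
    """Closed-form program: n * (n + 1) / 2."""
    return n * (n + 1) // 2

# Testing agrees on finitely many points; induction is needed beyond that.
assert all(tri_loop(n) == tri_closed(n) for n in range(100))
```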
Abstract. Non-linear polynomial systems over finite fields are used to model functional behavior of cryptosystems, with applications in system security, computer cryptography, and post-quantum cryptography. Solving polynomial systems is also one of the most difficult problems in mathematics. In this paper, we propose an automated reasoning procedure for deciding the satisfiability of a system of non-linear equations over finite fields. We introduce zero decomposition techniques to prove that polynomial constraints over finite fields yield finite basis explanation functions. We use these explanation functions in model constructing satisfiability solving, allowing us to equip a CDCL-style search procedure with tailored theory reasoning in SMT solving over finite fields. We implemented our approach and provide a novel and effective reasoning prototype for non-linear arithmetic over finite fields.
Abstract. The need to solve non-linear arithmetic constraints presents a major obstacle to the automatic verification of smart contracts. In this case study we focus on the two overapproximation techniques used by the industry verification tool Certora Prover: overapproximation of non-linear integer arithmetic using linear integer arithmetic and using non-linear real arithmetic. We compare the performance of contemporary SMT solvers on verification conditions produced by the Certora Prover using these two approximations against the natural non-linear integer arithmetic encoding. Our evaluation shows that the use of the overapproximation methods leads to solving a significant number of new problems.
Abstract. The quantification of system reliability is fundamental to the assessment of a system’s safety and resilience, and has been of interest to decision-makers. Since quantifying the system reliability is shown to be computationally intractable, researchers aim to find approximations. Existing approaches to approximate reliability either suffer from poor scalability or lack of correctness guarantees. Answer Set Programming (ASP) is a powerful tool for knowledge representation that can specify complex combinatorial problems. In recent years, the new applications of ASP have propelled the emergence of well-engineered ASP systems. This paper proposes a new ASP-counting-based framework, RelNet-ASP, to estimate the reliability of a system or network. The framework reduces the problem of reliability estimation to an approximate model counting problem on ASP programs, offering formal guarantees on the estimated reliability. The experimental evaluation demonstrates that RelNet-ASP outperforms state-of-the-art techniques in terms of both runtime performance and accuracy.
Abstract. Inductive invariant inference is the fundamental problem in program verification, and specifically in verification of functional programs that use nonlinear recursion and algebraic data types (ADTs). For ADTs, it is challenging to come up with an abstract domain that is rich enough to represent program properties and a procedure for invariant inference which is effective for this domain. Although there are various techniques for different abstract domains for ADTs, they often diverge while analyzing real-life programs because of low expressivity of their abstract domains. Moreover, it is often unclear if they could complement each other, other than by running in a portfolio. We present a lightweight approach to combining any existing techniques for different abstract domains collaboratively, thus targeting a more expressive domain. We instantiate the approach and obtain an effective inductive invariant inference algorithm in a rich combined domain of elementary and regular ADT invariants essentially for free. Because of the richer domain, collaborations of verifiers are capable of solving problems that are beyond the capabilities of the collaborators running independently. Our implementation of the algorithm is a collaboration of two existing state-of-the-art inductive invariant inference engines having general-purpose first-order logic solvers as a backend. Finally, we show that our implementation is capable of solving a large number of CHC-Comp 2022 problems obtained from Haskell verification problems, for which the existing tools diverge.
Abstract. Unit propagation and conflict analysis are two essential ingredients of CDCL SAT Solving. The order in which unit propagation is computed does not matter when no conflict is found, because it is well known that there exists a unique unit-propagation fixpoint. However, when a conflict is found, current CDCL implementations stop and analyze that concrete conflict, even though other conflicts may exist in the unit-propagation closure. In this experimental evaluation, we report on our experience in modifying this concrete aspect in the CaDiCaL SAT Solver and try to answer the question of whether we can improve the performance of SAT Solvers by the analysis of multiple conflicts.
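The uniqueness of the unit-propagation fixpoint mentioned above can be illustrated with a minimal propagator (this is our own didactic sketch, not CaDiCaL's watched-literal implementation): regardless of the order in which unit clauses are processed, the conflict-free closure reached is the same.

```python
# Minimal sketch of unit propagation. Literals are nonzero ints (negative
# for negated variables); clauses are lists of literals. Repeatedly assign
# the literal of any unit clause until a fixpoint or a conflict is reached.

def unit_propagate(clauses):
    assignment = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                 # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return assignment, clause   # all literals false: conflict
            if len(unassigned) == 1:        # unit clause: forced literal
                assignment.add(unassigned[0])
                changed = True
    return assignment, None                 # conflict-free fixpoint

closure, conflict = unit_propagate([[1], [-1, 2], [-2, 3]])
# closure == {1, 2, 3}, conflict is None
```

When several clauses become falsified in the same closure, a CDCL solver as described in the abstract stops at whichever conflict it encounters first; the paper asks whether analyzing the other conflicts in the closure helps.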
Abstract. We propose infinite model finding as a new task for SMT solving. Model finding has a long-standing tradition in SMT and automated reasoning in general. Yet, most of the current tools are limited to finite models despite the fact that many theories only admit infinite models. This paper shows a variety of such problems and evaluates synthesis approaches on them. Interestingly, state-of-the-art SMT solvers fail even on very small and simple problems. We target such problems with SyGuS tools as well as heuristic approaches.
Abstract. The famous double negation translation [16, 17] establishes an embedding of classical into intuitionistic logic. Curiously, the reverse direction has not been covered in the literature. Utilizing a normal form for intuitionistic logic [20], we establish a small model property for intuitionistic propositional logic. We use this property for a direct encoding of the Kripke semantics into classical propositional logic and quantified Boolean formulae. Next, we transfer the developed techniques to the first-order case and provide an embedding of intuitionistic first-order logic into classical first-order logic. Our goal here is an encoding that facilitates the use of state-of-the-art provers for classical first-order logic for determining intuitionistic validity. In an experimental evaluation, we show that our approach can compete with state-of-the-art provers for certain classes of benchmarks, in particular when the intuitionistic content is low. We further note that our constructions support the transfer of counter-models to validity, which is a desired feature in model checking applications.
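For reference, one standard form of the double negation translation mentioned above (the Gödel–Gentzen negative translation) maps each classical formula φ to an intuitionistic formula φ^N such that φ is classically provable iff φ^N is intuitionistically provable:

```latex
\begin{align*}
P^N &= \neg\neg P \qquad (P \text{ atomic})\\
(\neg\varphi)^N &= \neg\,\varphi^N\\
(\varphi \land \psi)^N &= \varphi^N \land \psi^N\\
(\varphi \lor \psi)^N &= \neg(\neg\varphi^N \land \neg\psi^N)\\
(\varphi \to \psi)^N &= \varphi^N \to \psi^N\\
(\forall x\,\varphi)^N &= \forall x\,\varphi^N\\
(\exists x\,\varphi)^N &= \neg\forall x\,\neg\varphi^N
\end{align*}
```

The paper's contribution is the curiously missing converse direction: embedding intuitionistic logic into classical logic.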
Abstract. Motivated by satisfiability of constraints with function symbols, we consider numerical inequalities on non-negative integers. The constraints we address are a conjunction of a linear system Ax = b and an arbitrary number of (reverse) convex constraints of the form x_i ≥ x_j^d (x_i ≤ x_j^d). We show that the satisfiability of these constraints is NP-complete even if the solution to the linear part is given explicitly. As a consequence, we obtain NP-completeness for an extension of certain quantifier-free constraints on sets with cardinalities and function images.
Abstract. This paper describes a new format for representing Tarskian-style interpretations for formulae in typed first-order logic, using the TPTP TF0 language. It further describes a technique and an implemented tool for verifying models using this representation, and a tool for visualizing interpretations. The research contributes to the advancement of automated reasoning technology for model finding, which has several applications, including verification.
Abstract. Radio 2-colorings of graphs are a generalization of vertex colorings motivated by the problem of assigning frequency channels in radio networks. In a radio 2-coloring of a graph, vertices are assigned integer colors so that the color of two vertices u and v differ by at least 2 if u and v are neighbors, and by at least 1 if u and v have a common neighbor. Our work improves the best-known bounds for optimal radio 2-colorings of small hypercube graphs, a combinatorial problem that has received significant attention in the past. We do so by using automated reasoning techniques such as symmetry breaking and Cube and Conquer, obtaining that for n = 7 and n = 8, the coding-theory upper bounds of Whittlesey et al. (1995) are not tight. Moreover, we prove the answer for n = 7 to be either 12 or 13, thus making a substantial step towards answering an open problem by Knuth (2015). Finally, we include several combinatorial observations that might be useful for further progress, while also arguing that fully determining the answer for n = 7 will require new techniques.
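The radio 2-coloring condition defined in the abstract can be checked mechanically; the sketch below is our own illustration (identifiers are ours, not the paper's) over a small path graph rather than a hypercube:

```python
# Sketch of the radio 2-coloring condition: colors of adjacent vertices
# must differ by at least 2, and colors of vertices sharing a common
# neighbor (distance 2) must differ by at least 1.
from itertools import combinations

def is_radio2(edges, coloring):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for u, v in combinations(coloring, 2):
        if v in adj.get(u, set()):                    # neighbors
            if abs(coloring[u] - coloring[v]) < 2:
                return False
        elif adj.get(u, set()) & adj.get(v, set()):   # common neighbor
            if abs(coloring[u] - coloring[v]) < 1:
                return False
    return True

path = [("a", "b"), ("b", "c")]          # a and c are at distance 2
print(is_radio2(path, {"a": 0, "b": 2, "c": 4}))  # True
print(is_radio2(path, {"a": 0, "b": 1, "c": 3}))  # False: |0-1| < 2
```

For the n-dimensional hypercube targeted in the paper, the vertices are the 2^n bit strings and the search for an optimal such coloring is what the SAT-based techniques attack.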
Abstract. We introduce a language-parametric calculus for k-safety verification: Cartesian Reachability logic (CRL).
In recent years, formal verification of hyperproperties has become an important topic in the formal methods community. An interesting class of hyperproperties is known as k-safety properties, which express the absence of a bad k-tuple of execution traces. Many security policies, such as noninterference, and functional properties, such as commutativity, monotonicity, and transitivity, are k-safety properties. A prominent example of a logic that can reason about k-safety properties of software systems is Cartesian Hoare logic (CHL). However, CHL targets a specific, small imperative language. In order to use it for sound verification of programs in a different language, one needs to extend it with the desired features or hand-craft a translation. Both these approaches require a lot of tedious, error-prone work.
Unlike CHL, CRL is language-parametric: it can be instantiated with an operational semantics (of a certain kind) of any deterministic language. Its soundness theorem is proved once and for all, with no need to adapt or re-prove it for different languages or their variants. This approach can significantly reduce the development costs of tools and techniques for sound k-safety verification of programs in deterministic languages: for example, of smart contracts written for EVM (the language powering the Ethereum blockchain), which already has an operational semantics serving as a reference.
Abstract. Inference and prediction of routes have become of interest over the past decade owing to a dramatic increase in package delivery and ride-sharing services. Given the underlying combinatorial structure and the incorporation of probabilities, route prediction involves techniques from both formal methods and machine learning. One promising approach for predicting routes uses decision diagrams that are augmented with probability values. However, the effectiveness of this approach depends on the size of the compiled decision diagrams. The scalability of the approach is limited owing to its empirical runtime and space complexity. In this work, our contributions are two-fold: first, we introduce a relaxed encoding that uses a linear number of variables with respect to the number of vertices in a road network graph to significantly reduce the size of resultant decision diagrams. Second, instead of a stepwise sampling procedure, we propose a single-pass sampling-based route prediction. In our evaluation on a real-world road network, we demonstrate that the resulting system achieves around twice the quality of suggested routes while being an order of magnitude faster compared to state-of-the-art.
Abstract. Differentiable logics (DL) have recently been proposed as a method of training neural networks to satisfy logical specifications. A DL consists of a syntax in which specifications are stated and an interpretation function that translates expressions in the syntax into loss functions. These loss functions can then be used during training with standard gradient descent algorithms. The variety of existing DLs and the differing levels of formality with which they are treated makes a systematic comparative study of their properties and implementations difficult. This paper remedies this problem by suggesting a meta-language for defining DLs that we call the Logic of Differentiable Logics, or LDL. Syntactically, it generalises the syntax of existing DLs to FOL, and for the first time introduces the formalism for reasoning about vectors and learners. Semantically, it introduces a general interpretation function that can be instantiated to define loss functions arising from different existing DLs. We use LDL to establish several theoretical properties of existing DLs and to conduct their empirical study in neural network verification.
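To make the idea of an interpretation function concrete, here is a hedged illustration of one classical choice of differentiable semantics (product fuzzy logic); this is our own example of the general pattern, not LDL's actual interpretation function or syntax:

```python
# Illustrative only: truth degrees live in [0, 1] and connectives are
# interpreted by product-logic operations; a specification's loss is then
# a differentiable function of the network's truth degree.

def AND(a, b): return a * b
def OR(a, b):  return a + b - a * b
def NOT(a):    return 1.0 - a

def loss(truth_degree):
    """Loss term driving training toward satisfying the specification."""
    return 1.0 - truth_degree

# A disjunctive spec whose two disjuncts hold with degrees 0.9 and 0.8
# yields a small loss (approximately 0.02), usable in gradient descent.
spec_loss = loss(OR(0.9, 0.8))
```

Different DLs differ precisely in such choices (e.g. other t-norms, or mapping to the reals instead of [0, 1]), which is the variety LDL is designed to capture uniformly.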