POS-14: Papers with Abstracts

Papers
Abstract. Over the years, parallel SAT solving has become more and more important. However, most state-of-the-art parallel SAT solvers are portfolio-based: they run several copies of the same solver with different parameters. In this paper, we propose a tool called Dolius, based mainly on the divide-and-conquer paradigm. In contrast to most current efficient parallel engines, Dolius does not need shared memory, can be distributed, and scales well when a large number of computing units is available. Furthermore, our tool provides an API that allows any SAT solver to be plugged in easily.
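Dolius itself and its API are not reproduced here. As a rough, hypothetical illustration of the divide-and-conquer paradigm the tool builds on, a subproblem can be split on a branching variable and each branch handed to a plugged-in sequential solver; the function names and the way the solver is attached below are assumptions for the sketch, not Dolius's actual interface.

# Minimal sketch of divide-and-conquer SAT solving (not Dolius's API).
# A formula is a list of clauses; a clause is a list of non-zero integer literals.

def split(formula, var):
    """Split a formula on one variable into two independent subproblems."""
    return formula + [[var]], formula + [[-var]]

def solve_divide_and_conquer(formula, pick_var, base_solve):
    """pick_var chooses a branching variable (or None to stop splitting);
    base_solve is any plugged-in sequential SAT solver returning True/False."""
    var = pick_var(formula)
    if var is None:                      # subproblem small enough: solve directly
        return base_solve(formula)
    pos, neg = split(formula, var)
    # In a real tool the two branches would run on different computing units;
    # here they are solved one after the other for simplicity.
    return solve_divide_and_conquer(pos, pick_var, base_solve) or \
           solve_divide_and_conquer(neg, pick_var, base_solve)

Any assignment satisfying one of the two branches satisfies the original formula, and every satisfying assignment of the original formula falls into one branch, so the split preserves satisfiability.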
Abstract. As satisfiability (SAT) solver performance has improved, so has solver complexity, which makes it more likely that SAT solvers contain bugs. One important source of increased complexity is clause sharing in parallel SAT solvers. SAT solvers can emit a proof of unsatisfiability to gain confidence that their results are correct. Such proofs must contain deletion information in order to be checked efficiently. Computing deletion information is easy and cheap for parallel solvers without clause sharing, but tricky for parallel solvers with clause sharing.

We present a method to generate unsatisfiability proofs from clause-sharing parallel SAT solvers. We show that the overhead of our method is small and that the produced proofs can be validated in time similar to the (CPU) solving time. Proofs produced by parallel solvers without clause sharing, however, can be checked in time similar to the (wall-clock) solving time. This raises the question of whether our method can be improved such that proofs from parallel solvers with clause sharing can be checked in time comparable to proofs from parallel solvers without clause sharing.
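The abstract does not fix a particular proof format; as one concrete example of what deletion information looks like, the DRAT format used in recent SAT competitions interleaves clause additions with deletion lines prefixed by "d":

1 -2 3 0
d 1 -2 3 0
0

The first line adds a learned clause (x1 or not-x2 or x3, terminated by 0), the second tells the checker it may discard that clause from then on, and the final empty clause establishes unsatisfiability. Without the deletion lines, the checker would have to keep every learned or shared clause for the rest of the check, which is exactly what makes efficient validation difficult.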
Abstract. Conflict-Driven Clause Learning (CDCL) algorithms are well known from an engineering point of view. Thanks to Minisat, their design is well understood, and most implementations follow the same ideas, with essentially the same components: the same heuristics, fast restarts, the same learning mechanism. However, their efficiency has an important drawback: they increasingly resemble complex systems and are harder and harder to handle. Unfortunately, only a few works focus on understanding them rather than improving them. In most cases, such studies follow a generate-and-test pattern: an idea is added to an existing solver and, if it improves efficiency, the idea is published and kept. In this paper, we analyse "post-mortem" the proofs produced by one typical CDCL solver, Glucose. The originality of our approach is that we consider the solver only as a builder of resolution proofs, and we analyze characteristics of these proofs on a set of selected unsatisfiable instances, shuffling each of them 200 times. We particularly focus on characterizing useless and useful clauses in the proof, as well as proof shapes. We also show that, despite their incredible efficiency, roughly 90% of the time spent in a CDCL solver is useless for producing the final proof.
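The abstract does not spell out how useless and useful clauses are told apart; one standard way, sketched below under the assumption that the proof is available as a DAG mapping each derived clause to its antecedents, is a backward traversal from the empty clause, so that everything left unvisited never contributed to the final proof.

# Hypothetical sketch: mark clauses that actually contribute to the final proof.
# `antecedents` maps a clause id to the ids of the clauses it was derived from;
# `empty_clause` is the id of the derived empty clause.

def useful_clauses(antecedents, empty_clause):
    useful, stack = set(), [empty_clause]
    while stack:
        cid = stack.pop()
        if cid in useful:
            continue
        useful.add(cid)
        stack.extend(antecedents.get(cid, ()))   # original clauses have no antecedents
    return useful

Clauses learned during search but never reached by this traversal correspond to solver work that did not contribute to the final proof.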
Abstract. Dividing a Boolean formula into smaller independent sub-formulae can be a useful technique for accelerating the solution of Boolean problems, including SAT and #SAT. Nevertheless, and despite promising early results, formula partitioning is hardly used in state-of-the-art solvers. In this paper, we show that this is rooted in the inconsistent usefulness of formula partitioning techniques. In particular, we evaluate two existing and one novel partitioning model, coupled with two existing and two novel partitioning algorithms, on a wide range of benchmark instances. Our results show that there is no one-size-fits-all solution: for different formula types, different partitioning models and algorithms are the most suitable. While these results might seem negative, they help to improve our understanding of formula partitioning; moreover, the findings also give some guidance as to which method to use for which kinds of formulae.
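The paper's partitioning models and algorithms are not reproduced here. As a minimal illustration of the basic idea only, a CNF formula can be split into variable-disjoint sub-formulae by grouping clauses that transitively share variables; the sketch below uses a simple union-find over variables and is an assumption about one possible realization, not one of the paper's models.

# Minimal sketch: split a CNF, given as a list of non-empty clauses over integer
# literals, into sub-formulae that share no variables.

def independent_subformulae(clauses):
    parent = {}                                  # union-find over variables
    def find(v):
        while parent.setdefault(v, v) != v:
            parent[v] = parent[parent[v]]        # path halving
            v = parent[v]
        return v
    def union(a, b):
        parent[find(a)] = find(b)

    for clause in clauses:
        variables = [abs(lit) for lit in clause]
        for v in variables[1:]:
            union(variables[0], v)

    parts = {}
    for clause in clauses:
        root = find(abs(clause[0]))
        parts.setdefault(root, []).append(clause)
    return list(parts.values())

Each returned sub-formula can then be solved independently; for #SAT, the model counts of the parts simply multiply.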
Abstract. In this paper we first present three new features for classifying CNF formulas. These features are based on the structural information of the formula and consider AND-gates as well as exactly-one constraints. Next, we use these features to construct a machine learning approach, based on random decision forests, that selects a SAT solver configuration for a given CNF formula. Based on this classification task we show that our new features are useful compared to existing features. Since the computation time for these features is small, the constructed classifier improves the performance of the SAT solvers on application and hand-crafted benchmarks. Moreover, the comparison shows that the set of new features also results in a better classification.
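The concrete features and training setup are described in the paper; the fragment below is only a hedged sketch of the general recipe (a random decision forest over per-formula feature vectors). The scikit-learn names, the placeholder feature values, and the configuration labels are assumptions made for illustration, not the authors' implementation.

# Hypothetical sketch: pick a solver configuration from CNF features with a
# random forest (feature extraction itself is not shown).
from sklearn.ensemble import RandomForestClassifier

# One row of structural features per training formula (placeholder values),
# e.g. counts of detected AND-gates and exactly-one constraints.
X_train = [[120, 4], [3, 95], [60, 40]]
y_train = ["config_A", "config_B", "config_A"]    # best configuration per formula

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_formula_features = [[110, 7]]
print(clf.predict(new_formula_features))          # configuration to run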
Abstract. The aim of this paper is to gather insight into the typical-case complexity of the Boolean Satisfiability (SAT) problem by mining the data from the SAT competitions. Specifically, the statistical properties of the SAT benchmarks and their impact on complexity are investigated, as well as connections between different metrics of complexity. While some of the investigated properties and relationships are "folklore" in the SAT community, this study aims at showing scientifically which parts of the folklore are true and which are not.
Abstract. One of the design principles of the state-of-the-art SAT solver Lingeling is to use data structures that are as compact as possible. These reduce memory usage and increase cache efficiency, and thus improve run-time, particularly when running multiple solver instances on multi-core machines, as in our parallel portfolio solver Plingeling and our cube-and-conquer solver Treengeling. The scheduler of a dozen inprocessing algorithms is an important aspect of Lingeling as well. In this talk we explain these design and implementation aspects of Lingeling and discuss new directions in solver design.
Abstract. Modern propositional satisfiability (or SAT) solvers are very powerful due to recent developments in the underlying data structures, the heuristics used to guide the search, the deduction techniques used to infer knowledge, and the formula simplification techniques applied during pre- and inprocessing. However, when all these techniques are put together, the soundness of the combined algorithm is no longer guaranteed, and understanding the complex dependencies becomes non-trivial.
In this paper we present a small set of rules that allows modern SAT solvers to be modeled as a state transition system. With these rules, all techniques applied in modern SAT solvers can be modeled adequately. Furthermore, we show that this set of rules is sound, complete, and confluent. Finally, we compare the proposed transition system to related systems and show how widely used solving techniques can be modeled.
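The paper's own rule set is not reproduced here. To fix intuition, transition rules of this kind are commonly written over states consisting of a partial assignment M and a clause set F, in the style of the well-known abstract DPLL framework; the two rules below are a generic illustration of that style, not the paper's formulation.

% Generic examples of transition rules over states M || F
% (illustration only; the paper defines its own, larger rule set).
\[
\textsc{UnitPropagate:}\quad
M \parallel F, (C \lor \ell) \;\Longrightarrow\; M\,\ell \parallel F, (C \lor \ell)
\quad \text{if } M \models \lnot C \text{ and } \ell \text{ is undefined in } M
\]
\[
\textsc{Decide:}\quad
M \parallel F \;\Longrightarrow\; M\,\ell^{d} \parallel F
\quad \text{if } \ell \text{ or } \lnot\ell \text{ occurs in } F \text{ and } \ell \text{ is undefined in } M
\]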
Abstract. Dependency Quantified Boolean Formulas (DQBF) are obtained by adding Henkin quantifiers to Boolean formulas and have attracted growing interest in recent years. Since deciding DQBF is NEXPTIME-complete, efficient ways of solving it would have many practical applications. Still, there is only little work on solving this kind of formula in practice. In this paper, we present an instantiation-based technique to solve DQBF efficiently. Apart from providing a theoretical foundation, we also propose a concrete implementation of our algorithm. Finally, we give a detailed experimental analysis evaluating our prototype iDQ on several DQBF as well as QBF benchmarks.
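To make the notion of Henkin quantifiers concrete (this example is illustrative and not taken from the paper), a DQBF attaches an explicit dependency set to each existential variable, independent of the order of the quantifiers:

% y_1 may depend only on x_1, and y_2 only on x_2.
\[
\forall x_1\,\forall x_2\;\exists y_1(x_1)\;\exists y_2(x_2).\;
(y_1 \leftrightarrow x_1) \land (y_2 \leftrightarrow x_2)
\]

This formula is true, witnessed by the Skolem functions y_1 := x_1 and y_2 := x_2, each of which respects its stated dependency set.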