ARCH20: Editor's Preface

This volume of proceedings contains the papers presented at the 7th International Workshop on Applied veRification for Continuous and Hybrid systems (ARCH) and the results of the 4th edition of ARCH-COMP, a competition for the formal verification of continuous and hybrid systems. The workshop was held on July 12, 2020, as part of the IFAC World Congress; due to the coronavirus pandemic, it took place via video conferencing. Previous editions of the ARCH workshop series were held in Berlin (2014), Seattle (2015), Vienna (2016), Pittsburgh (2017), Oxford (2018), and Montreal (2019). The goal of the ARCH workshops is to bring together people from industry with researchers and tool developers interested in applying verification techniques to continuous and hybrid systems. The workshops are accompanied by a collaborative website (cps-vo.org/group/ARCH), which features a curated collection of benchmarks, disseminates results submitted by researchers and tool developers, and provides feedback from practitioners in the form of experience reports. The benchmark repository is intended to serve as a lasting and evolving resource for the research community.

The workshop received 7 submissions, 6 of which were accepted by the program committee. Each submission was reviewed by 4 program committee members, including at least one member from academia and one from industry. 

In addition to the workshop papers, these proceedings present the results of the 4th edition of ARCH-COMP, a friendly competition that was carried out online from April to June 2020. ARCH-COMP showcases the participating tools and serves as a testing ground to determine which methods are particularly suitable for which types of problems. As a side effect, it aims to establish a consensus on how to compare different software implementations in the context of verification, as such comparisons are routinely demanded by reviewers of scientific publications.

All participating tools were represented in the competition jury, headed by the organizers. In the problem phase of the competition, participants submitted problem instances, which were then approved by the jury by consensus. In most categories, participants submitted a code package, and the performance measurements were run centrally under the supervision of Taylor T. Johnson. Participants who were not able to submit executable code carried out the performance measurements themselves, as indicated in the reports. To further establish the trustworthiness of the results, the code with which the results were obtained is available on the ARCH website.

In this 4th edition of ARCH-COMP, 42 tools participated in the competition and 25 tools took part in the repeatability evaluation. The competition categories and participating tools were:

  • Piecewise Constant Dynamics plus BMC (lead: Lei Bu): BACH, PHAVerLite, PHAVer/SX, TROPICAL, XSpeed
  • Continuous and Hybrid Systems with Linear Dynamics (lead: Matthias Althoff): CORA, C2E2, HyDRA, Hylaa, Hylaa-Continuous, JuliaReach, SpaceEx, XSpeed
  • Nonlinear Systems (lead: Luca Geretti): Ariadne, CORA, DynIbex, Flow*, Isabelle/HOL, JuliaReach
  • Stochastic Modelling (lead: Alessandro Abate): AMYTISS, FAUST, hpnmg, HYPEG, Mascot-SDS, The Modest Toolset, ProbReach, SReachTools, StocHy, SDCPN&IPS, (e,d) Abstraction
  • Artificial Intelligence and Neural Network Control Systems (lead: Taylor T. Johnson): NNV, OVERT, ReachNN, VenMAS
  • Falsification (lead: Gidon Ernst): S-TaLiRo, Breach, FalStar, falsify, ARIsTEO, zlscheck
  • Hybrid Programs (lead: Stefan Mitsch): KeYmaera, KeYmaera X 4.6.3, KeYmaera X 4.8.0, Isabelle/HOL/Hybrid-Systems-VCs, and HHL Prover

The 2020 prize for the best result was awarded to the tool JuliaReach, based on a vote by the attending audience.

The problem descriptions and the results are provided in a report for each category, drafted by the category lead together with representatives of the participating tools. Due to the diversity of problems, ARCH-COMP does not provide any ranking of tools. Nonetheless, the presented results arguably provide the most complete assessment of tools for the safety verification of continuous and hybrid systems to date.


Goran Frehse, Matthias Althoff (Program Chairs)
Sergiy Bogomolov (Publicity Chair)
Taylor T. Johnson (Evaluation Chair)
July 12, 2020
Berlin