How do we know that our system is correct?

Hana Chockler

A negative answer from a model-checking procedure is accompanied by a counterexample: a trace demonstrating what went wrong. When the answer is positive, however, usually no further information is given. The issue of "suspecting the positive answer" first arose in industry, where positive answers from model checkers sometimes concealed serious bugs in hardware designs. In this talk, I will discuss the different reasons why a positive answer from a model checker requires further investigation, and present algorithms for such investigation, called "sanity checks" for formal verification. I will also briefly introduce the theory of causality and its uses in model checking, specifically in the context of the subject of this talk. I will present what I see as the main goal of sanity checks and related algorithms, and will outline promising future directions.

The talk is based on many papers written by many people, and is not limited to my own research. It is reasonably self-contained.

--
Dr. Hana Chockler is a Lecturer in the Department of Informatics at King's College London. Before joining King's College in 2013, Hana worked as a Research Staff Member in the formal verification group at the IBM Research Laboratory in Haifa. Her research interests include formal verification of hardware and software, sanity checks for model checking, automatic generation of specifications, explanation of counterexamples, connections between concepts in AI and formal verification, and the integration of testing and formal methods, in particular using combinatorial optimisation to find bugs.