It is becoming increasingly important to verify the safety and security of AI applications. While declarative languages (of the kind found in automated planners and model checkers) are traditionally used for verifying AI systems, a major challenge is to design methods that generate verified executable programs. A good example of such a "verification to implementation" cycle is given by automated planning languages like PDDL, where plans are found via model search in a declarative language, but are then interpreted or compiled into executable code in an imperative language. In this talk, I will show that this method can itself be verified. I will present a formal framework and a prototype Agda implementation that represent PDDL plans as executable functions inhabiting types given by formulae describing planning problems. By exploiting the well-known Curry-Howard correspondence, type-checking then automatically ensures that the generated program corresponds precisely to the specification of the planning problem.
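To give a flavour of the idea, here is a minimal, simplified Agda sketch (with hypothetical names, not the paper's actual development): states and actions are declared as data types, a plan is a value whose type records the initial and goal states, and type-checking the plan is exactly checking it against the planning specification.

```agda
module PlanSketch where

-- A toy state space (hypothetical; the paper derives states from PDDL formulae).
data State : Set where
  s₀ s₁ s₂ : State

-- Individual actions, typed by the states they transform.
data Step : State → State → Set where
  a₁ : Step s₀ s₁
  a₂ : Step s₁ s₂

-- A plan from s to u is a chain of actions; its type is its specification.
data Plan : State → State → Set where
  done : ∀ {s} → Plan s s
  _∷_  : ∀ {s t u} → Step s t → Plan t u → Plan s u

-- Type-checking `plan` verifies that it really takes s₀ to s₂.
plan : Plan s₀ s₂
plan = a₁ ∷ (a₂ ∷ done)
```

In the actual framework, the types carry the logical preconditions and effects of PDDL actions rather than atomic state names, but the Curry-Howard principle is the same: a well-typed plan is a proof that it meets its specification.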
Joint work with: Christopher Schwaab, Alasdair Hill, Frantisek Farka, Ronald P. A. Petrick, Joe Wells, and Kevin Hammond.
This talk will follow the research paper presented at PADL'19: http://www.macs.hw.ac.uk/~ek19/pddl-verification.pdf.