Extracting Formal Specifications to Strengthen Type Behaviour Testing

08/17/2017
by Dimitri Racordon, et al.

Testing has become an indispensable activity of software development, yet writing good and relevant tests remains a challenging task. One well-known problem is that it is often impossible or unrealistic to test every outcome, as the input and/or output of a program component can range over incredibly large, if not infinite, domains. A common approach to tackle this issue is to test only classes of cases, and to assume that those classes cover all (or at least most) of the cases a component is likely to be exposed to. Unfortunately, such assumptions can prove wrong in many situations, causing an otherwise well-tested program to fail on a particular input. In this short paper, we propose to leverage formal verification, in particular model checking techniques, as a way to better identify cases for which the aforementioned assumptions do not hold, and ultimately strengthen the confidence one can have in a test suite. The idea is to extract a formal specification of the data types of a program, in the form of a term rewriting system, and to check that specification against a set of properties specified by the programmer. Cases for which those properties do not hold can then be identified using model checking, and selected as test cases.
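To make the workflow concrete, the sketch below illustrates the idea in miniature (it is not the authors' tool): a toy data type's behaviour is written as rewrite rules, its reachable values are explored exhaustively up to a bound (standing in for model checking), and inputs that violate a programmer-supplied property are collected as test cases. The saturating counter, its rules, and the property are all illustrative assumptions of this sketch.

```python
# Illustrative sketch: a term rewriting "specification" of a saturating
# counter, a bounded exploration of its values, and selection of
# property-violating inputs as test cases.

MAX = 3  # saturation bound of the toy counter type (assumption of this sketch)

def normalize(term):
    """Rewrite a term to its normal form under the counter's rules."""
    if isinstance(term, int):
        return term
    op, *args = term
    args = [normalize(a) for a in args]
    if op == "inc":                       # inc(n) -> n + 1, saturating at MAX
        return min(args[0] + 1, MAX)
    if op == "add":                       # add(m, n) -> m + n, saturating at MAX
        return min(args[0] + args[1], MAX)
    raise ValueError(f"unknown operator {op}")

def inc_is_strictly_increasing(n):
    """Programmer-supplied property: incrementing always yields a larger value."""
    return normalize(("inc", n)) > n

# Bounded exploration of the reachable values, in the spirit of model checking:
# every value is checked, and property violations become candidate test cases.
counterexamples = [n for n in range(MAX + 1) if not inc_is_strictly_increasing(n)]

print("test cases exposing the property violation:", counterexamples)  # -> [3]
```

Here the class-based assumption "incrementing any counter yields a larger value" holds for most inputs but fails exactly at the saturation point, which is the kind of corner case the paper proposes to surface automatically and turn into a test.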
