Automated Customized Bug-Benchmark Generation

01/09/2019
by Vineeth Kashyap, et al.

We introduce Bug-Injector, a system that automatically creates benchmarks for customized evaluation of static analysis tools. We share a benchmark generated using Bug-Injector and illustrate its efficacy by using it to evaluate the recall of leading open-source static analysis tools. Bug-Injector works by inserting bugs based on bug templates into real-world host programs. It searches dynamic program traces of the host program for points where the dynamic state satisfies a bug template's preconditions, and it modifies the corresponding host-program code to inject a bug based on the template. Every generated test case is accompanied by a program input whose trace has been shown to contain a dynamically-observed state triggering the injected bug. This approach allows us to generate on-demand test suites that meet a broad range of requirements and desiderata for bug benchmarks that we have identified. It also allows us to create customized benchmarks suitable for evaluating tools for a specific use case (i.e., a given codebase and bug types). Our experimental evaluation demonstrates the suitability of our generated test suites for evaluating static bug-detection tools and for comparing the performance of multiple tools.
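
As a rough illustration of the workflow described above, the following Python sketch scans a recorded dynamic trace for a state that satisfies a bug template's precondition, rewrites the program at that point, and pairs the buggy program with the input whose trace contained the matching state. All names here (BugTemplate, TracePoint, generate_test_case) are hypothetical and chosen for exposition; they are not the tool's actual API.

```python
# Hypothetical sketch of the Bug-Injector workflow: find a trace point whose
# dynamically observed state satisfies a bug template's precondition, then
# inject a bug at the corresponding source location.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class TracePoint:
    source_location: str                  # e.g., "host.c:142" in the host program
    variables: dict                       # dynamically observed variable values


@dataclass
class BugTemplate:
    name: str                             # e.g., "buffer-overflow"
    precondition: Callable[[TracePoint], bool]
    inject: Callable[[str, TracePoint], str]   # rewrites the source at that point


def find_injection_point(trace: list, template: BugTemplate) -> Optional[TracePoint]:
    """Return the first trace point whose dynamic state satisfies the template."""
    for point in trace:
        if template.precondition(point):
            return point
    return None


def generate_test_case(source: str, trace: list, program_input: str,
                       template: BugTemplate):
    """Produce (buggy source, triggering input), or None if no trace point matches."""
    point = find_injection_point(trace, template)
    if point is None:
        return None
    buggy_source = template.inject(source, point)
    # The input that produced the matching trace is kept as the witness that
    # exercises the injected bug, as in the generated test cases described above.
    return buggy_source, program_input
```

In this sketch, a template's precondition might check, for instance, that a pointer variable at the trace point references a buffer of known size, and its inject function would rewrite the nearby code to index past that size; the real system's templates and rewriting machinery are, of course, more elaborate.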
