CCTEST: Testing and Repairing Code Completion Systems

08/17/2022
by Zongjie Li, et al.

Code completion, a highly valuable topic in the software development domain, has been increasingly promoted by recent advances in large language models (LLMs). To date, visible LLM-based code completion frameworks like GitHub Copilot and GPT are trained using deep learning over vast quantities of unstructured text and open-source code. As a paramount component and cornerstone of daily programming tasks, code completion has substantially boosted professionals' efficiency in building real-world software systems. In contrast to this flourishing market, we find that code completion models often output suspicious results, and to date, no automated testing and enhancement framework for code completion models is available. This research proposes CCTEST, a framework to test and repair code completion systems in black-box settings. CCTEST features a novel mutation strategy, namely program structure-consistency (PSC) mutations, to generate mutated code completion inputs. It then detects inconsistent outputs, which represent likely erroneous cases, among all the completed code cases. Moreover, CCTEST repairs the code completion outputs by selecting the output that best reflects the "average" appearance of all output cases as the final output of the code completion system. We detected a total of 33,540 inputs that can trigger likely erroneous cases from eight popular LLM-based code completion systems. With repairing, we show that the performance of code completion models notably increased by 53.51%.
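The sketch below is a minimal, hypothetical illustration of the workflow the abstract describes, not the authors' implementation. It assumes a black-box `complete(prompt)` function, uses naive identifier renaming as a stand-in for one program structure-consistency (PSC) mutation, and approximates the "average" output as the completion with the smallest total dissimilarity to all other completions.

```python
import difflib
from typing import Callable, List, Tuple

def psc_mutations(prompt: str, names=("tmp", "val", "res")) -> List[str]:
    """Generate structure-preserving variants of the prompt (illustrative only)."""
    # A real PSC mutation keeps program structure intact; here we just rename "x".
    return [prompt.replace("x", n) for n in names]

def dissimilarity(a: str, b: str) -> float:
    """Cheap dissimilarity score based on difflib's similarity ratio."""
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

def cctest(prompt: str, complete: Callable[[str], str],
           threshold: float = 0.3) -> Tuple[List[str], str]:
    """Test a black-box code completion system and repair its output."""
    variants = [prompt] + psc_mutations(prompt)
    outputs = [complete(v) for v in variants]

    # Detection: flag inputs whose completions diverge strongly from the rest.
    scores = [sum(dissimilarity(o, other) for other in outputs) for o in outputs]
    suspicious = [v for v, s in zip(variants, scores)
                  if s / len(outputs) > threshold]

    # Repair: return the completion closest, on average, to all other completions.
    repaired = outputs[scores.index(min(scores))]
    return suspicious, repaired
```

In this sketch, `psc_mutations`, the `threshold` value, and the edit-similarity metric are placeholders; the paper defines its own PSC mutation operators and selection criterion.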
