Explainable AI for Pre-Trained Code Models: What Do They Learn? When They Do Not Work?
In recent years, there has been wide interest in designing deep neural network-based models that automate downstream software engineering tasks, such as program document generation, code search, and program repair. Although the main objective of these studies is to improve the effectiveness of the downstream task, many of them simply adopt the next best neural network model, without an in-depth analysis of why a particular solution works or fails on particular tasks or scenarios. In this paper, using an eXplainable AI (XAI) method (the attention mechanism), we study state-of-the-art Transformer-based models (CodeBERT and GraphCodeBERT) on a set of software engineering downstream tasks: code document generation (CDG), code refinement (CR), and code translation (CT). We first evaluate the validity of the attention mechanism for each task. Then, through quantitative and qualitative studies, we identify what CodeBERT and GraphCodeBERT learn (i.e., which source code token types receive the highest attention) on these tasks. Finally, we report common patterns observed when the models do not work as expected (i.e., perform poorly even though the problem at hand is easy) and suggest recommendations that may alleviate the observed challenges.
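As an illustration of the general idea only (not the paper's exact analysis pipeline), the sketch below shows how attention weights can be read out of CodeBERT with the HuggingFace `transformers` library to see which source code tokens receive the highest attention; the model checkpoint name, the example snippet, and the head/query averaging scheme are assumptions made for this example.

```python
# Minimal sketch: extract and rank per-token attention scores from CodeBERT.
# Assumes the public "microsoft/codebert-base" checkpoint; averaging over
# heads and query positions is one simple aggregation choice, not the paper's.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base", output_attentions=True)
model.eval()

code = "def add(a, b): return a + b"  # hypothetical example input
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
last_layer = outputs.attentions[-1]

# Average over heads and query positions to get a rough per-token score.
scores = last_layer.mean(dim=1).mean(dim=1).squeeze(0)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in sorted(zip(tokens, scores.tolist()), key=lambda x: -x[1]):
    print(f"{tok:>12s}  {score:.4f}")
```

Ranking tokens by such aggregated attention scores is one way to inspect, per task, which token types (identifiers, keywords, operators, etc.) the model focuses on.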