Boosting Commit Classification with Contrastive Learning

08/16/2023
by Jiajun Tong, et al.

Commit Classification (CC) is an important task in software maintenance that helps developers classify code changes into different types according to their nature and purpose. It allows developers to better understand how their development efforts are progressing, identify areas that need improvement, and make informed decisions about when and how to release new software versions. However, existing models require large amounts of manually labeled data for fine-tuning and ignore sentence-level semantic information, which is often essential for discovering the differences between diverse commits. Solving CC in the few-shot scenario therefore remains challenging. To address these problems, we propose a contrastive learning-based commit classification framework. Firstly, we generate K sentences and pseudo-labels according to the labels of the dataset, which aims to enhance the dataset. Secondly, we randomly group the augmented data N times and compare their similarity with the positive T_p^|C| and negative T_n^|C| samples. We utilize individual pretrained sentence transformers (STs) to efficiently obtain sentence-level embeddings from the different features. Finally, we adopt the cosine similarity function to constrain the distribution of vectors so that similar vectors lie closer together. The lightly fine-tuned model is then applied to predict the labels of incoming commits. Extensive experiments on two openly available datasets demonstrate that our framework solves the CC problem simply but effectively in few-shot scenarios, achieving state-of-the-art (SOTA) performance and improving the adaptability of the model without requiring a large number of training samples for fine-tuning. The code, data, and trained models are available at https://github.com/AppleMax1992/CommitFit.
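To make the core idea concrete, the sketch below illustrates the general mechanism the abstract describes: encoding commit messages with a pretrained sentence transformer and using cosine similarity to pull same-label (positive) pairs together while pushing different-label (negative) pairs apart. This is not the authors' released code (see the repository above for that); the backbone model name, the margin value, and the toy commit messages are illustrative assumptions.

```python
# Minimal contrastive-learning sketch, assuming the sentence-transformers
# library; model name, margin, and example messages are hypothetical.
import torch
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed backbone, not the paper's exact ST

def contrastive_cosine_loss(anchor, positive, negative, margin=0.5):
    """Encourage the anchor to be more similar to the positive than to the negative."""
    sim_pos = util.cos_sim(anchor, positive)   # cosine similarity to same-label sample
    sim_neg = util.cos_sim(anchor, negative)   # cosine similarity to different-label sample
    return torch.clamp(margin - sim_pos + sim_neg, min=0).mean()

# Toy commit messages with pseudo-labels ("fix" vs. "feature").
anchor_msg   = "fix null pointer exception in parser"
positive_msg = "fix crash when config file is missing"   # same label as anchor
negative_msg = "add dark mode to settings page"          # different label

emb = encoder.encode([anchor_msg, positive_msg, negative_msg], convert_to_tensor=True)
loss = contrastive_cosine_loss(emb[0:1], emb[1:2], emb[2:3])
print(f"contrastive loss: {loss.item():.4f}")
```

In a training loop, this loss would be backpropagated through the encoder so that embeddings of commits with the same label become more adjacent, after which a lightweight classifier can predict labels for incoming commits.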
