Preprocessing Source Code Comments for Linguistic Models

08/23/2022
by Sergey Matskevich, et al.

Comments are an important part of source code and a primary source of documentation. This has driven interest in using large bodies of comments to train or evaluate tools that consume or produce them, such as generating oracles or even code from comments, or automatically generating code summaries. Most of this work makes strong assumptions about the structure and quality of comments, for example that they consist mostly of proper English sentences. However, we know little about the actual quality of existing comments for these use cases. Comments often contain unique structures and elements that are not seen in other types of text, and filtering or extracting information from them requires extra care. This paper explores the contents and quality of Python comments drawn from the 840 most popular open-source projects on GitHub and 8422 projects from the SriLab dataset, and the impact that naïve vs. in-depth filtering can have on the use of existing comments for training and evaluating systems that generate comments.
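To illustrate the kind of preprocessing the abstract refers to, here is a minimal sketch (not taken from the paper) of a naïve Python comment extractor with simple heuristic filters; the specific heuristics, thresholds, and the example file name are assumptions for illustration only.

```python
# Illustrative sketch only, not the paper's pipeline: extract '#' comments
# and docstrings from Python source, then apply naive filtering heuristics.
import ast
import io
import tokenize


def extract_comments(source: str) -> list[str]:
    """Collect '#' comments and docstrings from a Python source string."""
    comments = []
    # '#' comments via the tokenizer
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            comments.append(tok.string.lstrip("#").strip())
    # docstrings via the AST
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.ClassDef,
                             ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node)
            if doc:
                comments.append(doc.strip())
    return comments


def naive_filter(comment: str) -> bool:
    """Keep comments that look like prose; drop obvious non-prose (heuristic)."""
    if len(comment.split()) < 3:          # too short to be a sentence
        return False
    if comment.startswith(("!", "-*-")):  # shebang or encoding cookie residue
        return False
    if "license" in comment.lower():      # boilerplate license headers
        return False
    return True


if __name__ == "__main__":
    # 'example.py' is a placeholder input file for this sketch
    src = open("example.py").read()
    kept = [c for c in extract_comments(src) if naive_filter(c)]
    print(f"kept {len(kept)} of {len(extract_comments(src))} comments")
```

A filter this naïve keeps license headers split across lines, commented-out code, and directive-style comments, which is exactly the kind of noise that motivates the more in-depth filtering the paper studies.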
