PT-Ranking: A Benchmarking Platform for Neural Learning-to-Rank

08/31/2020
by Hai-Tao Yu, et al.

Deep neural networks have become the first choice for researchers working on the algorithmic aspects of learning-to-rank. Unfortunately, it is not trivial to find the hyper-parameter settings that achieve the best ranking performance. As a result, it becomes increasingly difficult to develop a new model and conduct a fair comparison with prior methods, especially for newcomers. In this work, we propose PT-Ranking, an open-source project based on PyTorch for developing and evaluating learning-to-rank methods that use deep neural networks as the basis for constructing the scoring function. On one hand, PT-Ranking includes many representative learning-to-rank methods. Besides the traditional optimization framework via empirical risk minimization, an adversarial optimization framework is also integrated. Furthermore, PT-Ranking's modular design provides a set of building blocks that users can leverage to develop new ranking models. On the other hand, PT-Ranking supports comparing different learning-to-rank methods on widely used datasets (e.g., MSLR-WEB30K, Yahoo! LETOR and Istella LETOR) in terms of different metrics, such as precision, MAP, nDCG, and nERR. By randomly masking the ground-truth labels with a specified ratio, PT-Ranking allows users to examine to what extent the ratio of unlabelled query-document pairs affects the performance of different learning-to-rank methods. We further conducted a series of demo experiments to clearly show the effect of different factors on neural learning-to-rank methods, such as the activation function, the number of layers, and the optimization strategy.
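To make the two core ideas above concrete, the following is a minimal sketch (not PT-Ranking's actual API; all names and shapes are illustrative assumptions): a small PyTorch feed-forward network that scores each document of a query independently, and a helper that randomly masks a specified ratio of ground-truth labels to simulate unlabelled query-document pairs.

```python
# Illustrative sketch only, not taken from the PT-Ranking codebase.
# Assumes per-query tensors: document features [num_docs, num_features]
# and graded relevance labels [num_docs].
import torch
import torch.nn as nn


class FeedForwardScorer(nn.Module):
    """Scores each document independently with a small feed-forward network."""

    def __init__(self, num_features: int, hidden_dim: int = 64, num_layers: int = 2):
        super().__init__()
        layers = []
        in_dim = num_features
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            in_dim = hidden_dim
        layers.append(nn.Linear(in_dim, 1))  # one relevance score per document
        self.net = nn.Sequential(*layers)

    def forward(self, doc_features: torch.Tensor) -> torch.Tensor:
        # doc_features: [num_docs, num_features] -> scores: [num_docs]
        return self.net(doc_features).squeeze(-1)


def mask_labels(labels: torch.Tensor, mask_ratio: float) -> torch.Tensor:
    """Randomly marks a fraction of ground-truth labels as unlabelled (-1)."""
    masked = labels.clone()
    mask = torch.rand_like(labels, dtype=torch.float) < mask_ratio
    masked[mask] = -1  # -1 denotes an unlabelled query-document pair
    return masked


if __name__ == "__main__":
    scorer = FeedForwardScorer(num_features=136)   # MSLR-WEB30K uses 136 features
    feats = torch.randn(50, 136)                   # 50 candidate documents for one query
    labels = torch.randint(0, 5, (50,))            # graded relevance in {0, ..., 4}
    print(scorer(feats).shape)                     # torch.Size([50])
    print((mask_labels(labels, 0.3) == -1).float().mean())  # roughly 0.3 of labels masked
```

The per-document scorer is the piece that the paper's demo experiments vary (activation function, number of layers, optimizer), while the masking helper corresponds to the study of how the ratio of unlabelled pairs affects ranking performance.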
