How Researchers Could Obtain Quick and Cheap User Feedback on their Algorithms Without Having to Operate their Own Recommender System

12/14/2022
by Tobias Eichinger, et al.

Most recommendation algorithms are evaluated on historical benchmark datasets. Evaluation on historical benchmark datasets is quick and cheap to conduct, yet it excludes the viewpoint of the users who actually consume the recommendations. User feedback is seldom collected, since it requires access to an operational recommender system. Establishing and maintaining an operational recommender system imposes a time and financial burden that most researchers cannot shoulder. We aim to reduce this burden in order to promote widespread user-centric evaluation of recommendation algorithms, in particular for novice researchers in the field. We present work in progress on an evaluation tool that implements a novel paradigm enabling user-centric evaluation of recommendation algorithms without access to an operational recommender system. Finally, we sketch the experiments we plan to conduct with the help of the evaluation tool.
