The Natural Auditor: How To Tell If Someone Used Your Words To Train Their Model

11/01/2018
by Congzheng Song, et al.

To help enforce data-protection regulations such as GDPR and detect unauthorized uses of personal data, we propose a new model auditing technique that enables users to check if their data was used to train a machine learning model. We focus on auditing deep-learning models that generate natural-language text, including word prediction and dialog generation. These models are at the core of many popular online services. Furthermore, they are often trained on very sensitive personal data, such as users' messages, searches, chats, and comments. We design and evaluate an effective black-box auditing method that can detect, with very few queries to a model, if a particular user's texts were used to train it (among thousands of other users). In contrast to prior work on membership inference against ML models, we do not assume that the model produces numeric confidence values. We empirically demonstrate that we can successfully audit models that are well-generalized and not overfitted to the training data. We also analyze how text-generation models memorize word sequences and explain why this memorization makes them amenable to auditing.
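To make the idea concrete, here is a minimal, illustrative sketch of a black-box audit of the kind the abstract describes; it is not the paper's actual algorithm. The toy model, the `audit` function, the top-k hit-rate statistic, and the fixed `threshold` are all hypothetical stand-ins: the auditor only sees ranked next-word candidates (no numeric confidence values) and flags a user's text when the model predicts its next words unusually well.

```python
# Illustrative black-box membership audit (hypothetical sketch, not the
# paper's exact method). Assumption: the model exposes only a ranked list
# of candidate next words, with no probabilities or confidence scores.

def top_k_hit_rate(model, text, k=1):
    """Fraction of positions where the true next word ranks in the top k."""
    words = text.split()
    hits = 0
    for i in range(1, len(words)):
        context = tuple(words[:i])
        candidates = model(context)  # ranked candidate words, best first
        if words[i] in candidates[:k]:
            hits += 1
    return hits / max(len(words) - 1, 1)

def audit(model, user_text, threshold=0.5, k=1):
    """Flag the text as likely training data if the hit rate is high.
    In practice `threshold` would be calibrated on texts known to be
    outside the training set; 0.5 here is an arbitrary placeholder."""
    return top_k_hit_rate(model, user_text, k) >= threshold

# Toy stand-in for a trained text-generation model: a memorized bigram
# table that returns earlier-seen successors first.
def toy_model_factory(training_text):
    words = training_text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    def model(context):
        return table.get(context[-1], ["the"])  # fallback guess for unseen context
    return model

model = toy_model_factory("the cat sat on the mat")
print(audit(model, "the cat sat on the mat"))    # True: memorized text
print(audit(model, "dogs chase red balls fast"))  # False: unseen text
```

The gap between the hit rate on a user's text and the rate on held-out text is what makes the audit work: even well-generalized models predict memorized word sequences measurably better than unseen ones.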
