Detecting Potential Local Adversarial Examples for Human-Interpretable Defense

09/07/2018
by Xavier Renard, et al.

Machine learning models are increasingly used in industry to make decisions such as credit insurance approval. Applicants may be tempted to manipulate specific variables, such as age or salary, to improve their chances of approval. In this ongoing work, we discuss the problem of detecting potential local adversarial examples on classical tabular data and make a first proposition: provide a human expert with the features that are locally critical to the classifier's decision, so that the expert can verify the supplied information and prevent fraud.
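To illustrate the general idea of surfacing locally critical features of a single decision for human review (this is only a minimal sketch, not the authors' actual proposition), one could perturb each feature of a given instance and rank features by how strongly the classifier's predicted approval probability reacts. The data, feature names, and perturbation scheme below are hypothetical.

```python
# Hypothetical sketch: rank the features of one tabular instance by their
# local impact on the predicted approval probability, so a human expert
# can double-check the most influential (and most manipulable) inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for tabular credit data; feature names are invented.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = ["age", "salary", "tenure", "debt_ratio", "n_loans", "savings"]

clf = RandomForestClassifier(random_state=0).fit(X, y)


def locally_critical_features(clf, x, X_ref, feature_names, delta=0.25, top_k=3):
    """Rank the features of instance x by the change in predicted probability
    when each feature is nudged by +/- delta standard deviations."""
    base = clf.predict_proba(x.reshape(1, -1))[0, 1]
    scales = X_ref.std(axis=0)
    scores = []
    for j, name in enumerate(feature_names):
        impacts = []
        for sign in (-1.0, 1.0):
            x_pert = x.copy()
            x_pert[j] += sign * delta * scales[j]
            impacts.append(abs(clf.predict_proba(x_pert.reshape(1, -1))[0, 1] - base))
        scores.append((name, max(impacts)))
    return sorted(scores, key=lambda t: t[1], reverse=True)[:top_k]


# Flag the locally critical features of one decision for human inspection.
x0 = X[0]
for name, impact in locally_critical_features(clf, x0, X, feature_names):
    print(f"{name}: local impact on approval probability = {impact:.3f}")
```

A feature that ranks high here is one whose manipulation would most easily flip the decision, which is exactly the information a human expert would want to verify against documentary evidence.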
