Interpretable Machine Learning for Privacy-Preserving Pervasive Systems
The presence of pervasive systems in our everyday lives and the interaction of users with connected devices such as smartphones or home appliances generate increasing amounts of traces that reflect users' behavior. A plethora of machine learning techniques enables service providers to process these traces and extract latent information about the users. While most existing work has focused on the accuracy of these techniques, little attention has been paid to interpreting the inference and identification algorithms built on them. In this paper, we propose a machine learning interpretability framework for inference algorithms based on data collected through pervasive systems, and we outline the open challenges in this research area. Our interpretability framework enables users to understand how the traces they generate could expose their privacy, while still allowing for usable and personalized services.