Self-Checking Deep Neural Networks in Deployment
The widespread adoption of Deep Neural Networks (DNNs) in important domains raises questions about the trustworthiness of DNN outputs. Even a highly accurate DNN will make mistakes some of the time, and in settings like self-driving vehicles these mistakes must be quickly detected and properly dealt with in deployment. Just as our community has developed effective techniques and mechanisms to monitor and check programmed components, we believe it is now necessary to do the same for DNNs. In this paper we present DNN self-checking as a process by which internal DNN layer features are used to check DNN predictions. We detail SelfChecker, a self-checking system that monitors DNN outputs and triggers an alarm if the internal layer features of the model are inconsistent with the final prediction. SelfChecker also provides advice in the form of an alternative prediction. We evaluated SelfChecker on four popular image datasets and three DNN models and found that SelfChecker triggers correct alarms on 60.56% of wrong DNN predictions and false alarms on only 2.04% of correct DNN predictions, a substantial improvement over prior work (SELFORACLE, DISSECTOR, and ConfidNet). In experiments with self-driving car scenarios, SelfChecker triggers more correct alarms than SELFORACLE for two DNN models (DAVE-2 and Chauffeur) with comparable false alarms. Our implementation is available as open source.
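To make the self-checking idea concrete, the sketch below shows one simple way to check internal layer features against a model's final prediction in a Keras setting. It is illustrative only: the nearest-class-centroid heuristic, the majority-vote alarm rule, and all function and variable names are assumptions for this sketch, not SelfChecker's actual algorithm as described in the paper.

```python
import numpy as np
import tensorflow as tf


def build_feature_extractor(model, layer_names):
    """Sub-model exposing the internal layer outputs to be monitored."""
    outputs = [model.get_layer(name).output for name in layer_names]
    return tf.keras.Model(inputs=model.input, outputs=outputs)


def fit_class_centroids(extractor, x_train, y_train, num_classes):
    """Mean feature vector per (layer, class), estimated on training data.

    y_train is expected to hold integer class labels, not one-hot vectors.
    """
    feats = extractor.predict(x_train, verbose=0)
    if not isinstance(feats, list):          # single monitored layer
        feats = [feats]
    centroids = []
    for f in feats:
        f = f.reshape(len(x_train), -1)      # flatten spatial dimensions
        centroids.append(np.stack([f[y_train == c].mean(axis=0)
                                   for c in range(num_classes)]))
    return centroids


def self_check(model, extractor, centroids, x):
    """Return (final_prediction, alarm, advice) for a single-input batch x."""
    final_pred = int(np.argmax(model.predict(x, verbose=0), axis=1)[0])
    feats = extractor.predict(x, verbose=0)
    if not isinstance(feats, list):
        feats = [feats]
    # Each monitored layer "votes" for the class whose training centroid
    # lies closest to that layer's features on this input.
    layer_votes = []
    for f, c in zip(feats, centroids):
        dists = np.linalg.norm(c - f.reshape(1, -1), axis=1)
        layer_votes.append(int(np.argmin(dists)))
    # Alarm if most monitored layers disagree with the final prediction;
    # advice is the class most layers agree on (an alternative prediction).
    disagree = sum(v != final_pred for v in layer_votes)
    alarm = disagree > len(layer_votes) / 2
    advice = int(np.bincount(layer_votes).argmax())
    return final_pred, alarm, advice
```

In a deployment wrapper of this kind, each incoming input would be passed through self_check, and inputs that raise an alarm could be routed to a fallback such as the alternative (advice) prediction or a human operator.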