Into the unknown: Active monitoring of neural networks

09/14/2020
by Anna Lukina, et al.

Machine-learning techniques achieve excellent performance in modern applications. In particular, neural networks make it possible to train classifiers, often used in safety-critical applications, for a variety of tasks without human supervision. However, neural-network models have neither the means to identify what they do not know nor the ability to interact with a human user before making a decision. When deployed in the real world, such models work reliably in scenarios they have seen during training; in unfamiliar situations, they can exhibit unpredictable behavior that compromises the safety of the whole system. We propose an algorithmic framework for active monitoring of neural-network classifiers that allows for their deployment in dynamic environments where unknown input classes appear frequently. Based on quantitative monitoring of the feature layer, we detect novel inputs and ask an authority for labels, which enables the framework to adapt to these novel classes. A neural network wrapped in our framework achieves higher classification accuracy on unknown input classes over time than the original standalone model. The typical approach to adapting to unknown input classes is to retrain the neural-network classifier on an augmented training dataset; however, the system is vulnerable before such a dataset becomes available. Owing to the underlying monitor, our framework instead adapts to novel inputs incrementally, thereby improving the short-term reliability of the classification.
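To make the abstract's monitor-query-adapt loop concrete, here is a minimal sketch of one plausible realization: a distance-to-centroid novelty test on feature-layer activations, with incremental centroid updates when the authority supplies a label. All names (FeatureMonitor, the threshold, extract_features, ask_authority) are illustrative assumptions, not the authors' actual algorithm.

    # Hypothetical sketch of an active-monitoring loop, assuming a
    # centroid-based novelty score over feature-layer activations.
    import numpy as np

    class FeatureMonitor:
        """Flags feature vectors that lie far from every known class."""

        def __init__(self, threshold):
            self.threshold = threshold  # max distance to accept a known class
            self.centroids = {}         # label -> running mean feature vector
            self.counts = {}            # label -> number of samples seen

        def classify_or_flag(self, features):
            """Return (label, False) if recognized, (None, True) if novel."""
            if not self.centroids:
                return None, True
            label, dist = min(
                ((c, np.linalg.norm(features - mu))
                 for c, mu in self.centroids.items()),
                key=lambda pair: pair[1],
            )
            if dist > self.threshold:
                return None, True       # quantitatively novel: defer to authority
            return label, False

        def incorporate(self, features, label):
            """Adapt to one labeled sample without retraining the network."""
            n = self.counts.get(label, 0)
            mu = self.centroids.get(label, np.zeros_like(features))
            self.centroids[label] = (mu * n + features) / (n + 1)
            self.counts[label] = n + 1

    def monitored_inference(extract_features, ask_authority, monitor, x):
        """Wrap a classifier: answer when confident, else query for a label."""
        f = extract_features(x)         # feature-layer activations of the network
        label, novel = monitor.classify_or_flag(f)
        if novel:
            label = ask_authority(x)    # human/oracle provides the true label
            monitor.incorporate(f, label)  # adapt immediately, before retraining
        return label

Because the monitor updates per sample, the wrapped classifier can start handling a novel class after a single authority query, which is the short-term reliability gain the abstract contrasts with waiting for an augmented dataset and a full retraining cycle.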
