Extreme Model Compression for On-device Natural Language Understanding

In this paper, we propose and experiment with techniques for extreme compression of neural natural language understanding (NLU) models, making them suitable for execution on resource-constrained devices. We propose a task-aware, end-to-end compression approach that performs word-embedding compression jointly with NLU task learning. We show our results on a large-scale, commercial NLU system trained on a varied set of intents with huge vocabulary sizes. Our approach outperforms a range of baselines and achieves a compression rate of 97.4% with minimal loss in performance. Our analysis indicates that the signal from the downstream task is important for effective compression with minimal degradation in performance.
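As a minimal sketch of what task-aware, end-to-end embedding compression can look like, the PyTorch snippet below factorizes the embedding table into two low-rank matrices and trains them jointly with a toy intent classifier, so the compressed embedding is optimized directly against the downstream NLU loss. The low-rank factorization, class names, dimensions, and random training data are all illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CompressedEmbedding(nn.Module):
    """Low-rank factorized embedding: a (vocab x rank) table followed by a
    (rank x dim) projection replaces the full (vocab x dim) table, cutting
    parameters whenever rank << dim."""
    def __init__(self, vocab_size: int, embed_dim: int, rank: int):
        super().__init__()
        self.low = nn.Embedding(vocab_size, rank)          # vocab_size x rank
        self.up = nn.Linear(rank, embed_dim, bias=False)   # rank x embed_dim

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.up(self.low(token_ids))

class IntentClassifier(nn.Module):
    """Toy NLU model: compressed embeddings -> mean pool -> intent logits.
    Training end-to-end on the classification loss also trains the
    compressed embedding, which is what makes the compression task-aware."""
    def __init__(self, vocab_size=50_000, embed_dim=300, rank=16, num_intents=20):
        super().__init__()
        self.embed = CompressedEmbedding(vocab_size, embed_dim, rank)
        self.head = nn.Linear(embed_dim, num_intents)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        pooled = self.embed(token_ids).mean(dim=1)   # (batch, embed_dim)
        return self.head(pooled)                     # (batch, num_intents)

model = IntentClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random data: gradients from the intent
# loss flow through both embedding factors.
tokens = torch.randint(0, 50_000, (8, 12))   # batch of 8 utterances, 12 tokens each
labels = torch.randint(0, 20, (8,))
optimizer.zero_grad()
loss = loss_fn(model(tokens), labels)
loss.backward()
optimizer.step()
```

With these illustrative sizes, the full table would hold 50,000 × 300 = 15M embedding parameters, while the factorized version holds 50,000 × 16 + 16 × 300 ≈ 0.8M, roughly a 94.6% reduction. Because the gradient flows from the intent loss into both factors, the compressed representation is shaped by the downstream task rather than by reconstruction error alone.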
