CAMeMBERT: Cascading Assistant-Mediated Multilingual BERT

12/22/2022
by Dan DeGenaro, et al.

Large language models with hundreds of millions, or even billions, of parameters have performed extremely well on a variety of natural language processing (NLP) tasks. Their widespread use and adoption, however, is hindered by the lack of availability and portability of sufficiently large computational resources. This paper proposes a knowledge distillation (KD) technique building on the work of LightMBERT, a student model of multilingual BERT (mBERT). By repeatedly distilling mBERT through increasingly compressed, top-layer-distilled teacher assistant networks, CAMeMBERT aims to improve upon the time and space complexities of mBERT while keeping the loss of accuracy beneath an acceptable threshold. At present, CAMeMBERT has an average accuracy of around 60.1%, which is subject to change after future improvements to the hyperparameters used in fine-tuning.
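To make the cascading idea concrete, below is a minimal PyTorch sketch of distillation through a chain of increasingly compressed teacher assistants. It uses a generic soft-target (temperature-scaled KL) objective and toy stand-in models; the paper's actual objective (top-layer distillation in the style of LightMBERT, applied to mBERT) and its architectures are not reproduced here, and all names such as `distill_step` and `cascade_distill` are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_step(teacher, student, batch, optimizer, temperature=2.0):
    """One KD step: the student matches the teacher's softened
    output distribution (standard soft-target distillation)."""
    teacher.eval()
    student.train()
    with torch.no_grad():
        teacher_logits = teacher(batch)
    student_logits = student(batch)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def cascade_distill(models, dataloader, epochs=1, lr=1e-4):
    """Cascading distillation: models[0] plays the role of the
    original teacher (e.g. mBERT); each smaller model is distilled
    from the previous one and then serves as the teacher assistant
    for the next, progressively more compressed stage."""
    for teacher, student in zip(models[:-1], models[1:]):
        optimizer = torch.optim.AdamW(student.parameters(), lr=lr)
        for _ in range(epochs):
            for batch in dataloader:
                distill_step(teacher, student, batch, optimizer)
    return models[-1]  # the final, most compressed student

if __name__ == "__main__":
    # Toy stand-ins for a shrinking chain of encoders (not mBERT).
    hidden_sizes = [768, 512, 384, 256]
    models = [nn.Sequential(nn.Linear(128, h), nn.ReLU(), nn.Linear(h, 10))
              for h in hidden_sizes]
    data = [torch.randn(32, 128) for _ in range(10)]
    final_student = cascade_distill(models, data)
```

The design point the cascade addresses is the known capacity-gap problem in KD: distilling directly from a very large teacher into a very small student tends to work worse than passing knowledge through intermediate teacher assistants of decreasing size.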
