Reinforcement learning from human feedback (RLHF) can improve the qualit...
Reliable automatic evaluation of summarization systems is challenging du...
Evaluation metrics that are not robust to dialect variation make it impo...
In this paper we share findings from our effort to build practical machi...
Achieving universal translation between all human language pairs is the...
In this paper, we introduce DOCmT5, a multilingual sequence-to-sequence...
Recently, mT5 - a massively multilingual version of T5 - leveraged a uni...
Machine learning has brought striking advances in multilingual natural l...
Large pre-trained multilingual models like mBERT, XLM-R achieve state of...
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified
...
Pre-trained cross-lingual encoders such as mBERT (Devlin et al., 2019) a...
Unsupervised translation has reached impressive performance on resource-...
Over the last few years two promising research directions in low-resourc...
Much recent progress in applications of machine learning models to NLP h...
The recently proposed massively multilingual neural machine translation ...
Pre-trained word embeddings are the primary method for transfer learning...
There have been significant innovations in media technologies in the rec...
User interaction with voice-powered agents generates large amounts of un...
Several recent papers investigate Active Learning (AL) for mitigating th...
The problem of automatic accent identification is important for several...