Improving the alignment of language models with human preferences remain...
Learning from human feedback has been shown to be effective at aligning ...
The evaluation of abstractive summarization models typically uses test d...
Conditional language models are predominantly trained with maximum likel...
Machine learning algorithms typically assume independent and identically...
While large pretrained Transformer models have proven highly capable at ...
Widely used evaluation metrics for text generation either do not work we...
Most prior work in the sequence-to-sequence paradigm focused on datasets...
Recent work pre-training Transformers with self-supervised objectives on...
Transfer learning, where a model is first pre-trained on a data-rich tas...
We propose an end-to-end neural model for zero-shot abstractive text sum...
Discriminative neural networks offer little or no performance guarantees...
We propose a model-based metric to estimate the factual accuracy of gene...
Massively multi-label prediction/classification problems arise in enviro...
Abstractive summarization has been studied using neural sequence transdu...
Clinicians spend a significant amount of time inputting free-form textua...
We show that generating English Wikipedia articles can be approached as ...
Predictive modeling with electronic health record (EHR) data is anticipa...
The driving force behind the recent success of LSTMs has been their abil...
Neural sequence-to-sequence models have provided a viable new approach f...
Recurrent neural network models with an attention mechanism have proven ...
Sequence to sequence models are successful tools for supervised sequence...