In program synthesis, an intelligent system takes in a set of user-gener...
We propose a simple yet effective and robust method for contrastive capt...
We propose a method to fuse frozen text-only large language models (LLMs...
The BigCode community, an open-scientific collaboration working on the r...
We propose an efficient method to ground pretrained text-only language m...
The BigCode project is an open-scientific collaboration working on the r...
To extend the scope of coding queries to more realistic settings, we pro...
Sampling diverse programs from a code language model and reranking with ...
We demonstrate how language can improve geolocation: the task of predict...
Existing approaches built separate classifiers to detect nonsense in dia...
People rely heavily on context to enrich meaning beyond what is literall...
Likelihood, although useful as a training loss, is a poor search objecti...
Generative models of code, pretrained on large corpora of programs, have...
Code is seldom written in a single left-to-right pass and is instead rep...
In classic instruction following, language like "I'd like the JetBlue fl...
We present a grounded neural dialogue model that successfully collaborat...
We propose a modular architecture for following natural language instruc...
Textual representation learners trained on large amounts of data have ac...
We apply a generative segmental model of task structure, guided by narra...
Neural parsers obtain state-of-the-art results on benchmark treebanks fo...
Vision-and-Language Navigation (VLN) requires grounding instructions, su...
We improve the informativeness of models for conditional text generation...
Dynamic oracles provide strong supervision for training constituency par...
Navigation guided by natural language instructions presents a challengin...
We extend models for both following and generating natural language inst...
Generative neural models have recently achieved state-of-the-art results...
Recent work has proposed several generative neural models for constituen...
We describe a strategy for the acquisition of training data necessary to...
We investigate the hypothesis that word representations ought to incorpo...