Constrained Language Models Yield Few-Shot Semantic Parsers

04/18/2021
by   Richard Shin, et al.

We explore the use of large pretrained language models as few-shot semantic parsers. The goal in semantic parsing is to generate a structured meaning representation given a natural language input. However, language models are trained to generate natural language, not structured representations. To bridge the gap, we use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation. With only a small amount of data and very little code to convert into English-like representations, we provide a blueprint for rapidly bootstrapping semantic parsers and demonstrate good performance on multiple tasks.
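
To illustrate the constrained-decoding idea the abstract describes, here is a minimal Python sketch: every canonical utterance in the controlled sublanguage is stored in a prefix trie, and at each step the decoder may only choose among continuations the trie allows. The CANONICAL utterance list and the lm_next_token_logprobs function are hypothetical stand-ins, not the paper's implementation; a real system would derive valid continuations from a grammar and score them with a pretrained language model.

```python
import math

# Toy "controlled sublanguage": each valid canonical utterance is a
# word sequence. In the paper this set is defined by a grammar; for a
# small example, a prefix trie over whole utterances plays the same role.
CANONICAL = [
    "create event with name team sync".split(),
    "create event on monday".split(),
    "delete event with name team sync".split(),
]

def build_trie(utterances):
    """Build a prefix trie; '<end>' marks a complete utterance."""
    trie = {}
    for words in utterances:
        node = trie
        for w in words:
            node = node.setdefault(w, {})
        node["<end>"] = {}
    return trie

def lm_next_token_logprobs(prefix, candidates):
    # Hypothetical stand-in for a pretrained LM: uniform scores.
    # A real system would query the model for next-token logits and
    # keep only those belonging to `candidates`.
    return {w: -math.log(len(candidates)) for w in candidates}

def constrained_decode(trie):
    """Greedy decoding restricted to continuations the trie permits."""
    prefix, node = [], trie
    while True:
        candidates = list(node.keys())
        scores = lm_next_token_logprobs(prefix, candidates)
        best = max(candidates, key=scores.get)
        if best == "<end>":
            return " ".join(prefix)
        prefix.append(best)
        node = node[best]

print(constrained_decode(build_trie(CANONICAL)))
```

Because decoding can only ever produce a canonical utterance, the final step of mapping that utterance to the target meaning representation can be a deterministic translation, which is what keeps the amount of task-specific code small.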
