Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango

09/16/2022
by   Aman Madaan, et al.

Reasoning is a key pillar of human cognition and intelligence. In the past decade, we have witnessed dramatic gains in natural language processing and unprecedented scaling of large language models. Recent work has characterized the capability of few-shot prompting techniques such as chain of thought to emulate human reasoning in large language models. This hallmark feature of few-shot prompting, combined with ever-larger language models, has opened a vista of possibilities for solving various tasks, such as math word problems, code completion, and commonsense reasoning. Chain of thought (CoT) prompting further pushes the performance of models in a few-shot setup by supplying intermediate steps and urging the model to follow the same process. Despite its compelling performance, the genesis of the reasoning capability in these models remains under-explored. This work takes preliminary steps towards a deeper understanding of reasoning mechanisms in large language models. Our work centers on querying the model while controlling for all but one of the components in a prompt: symbols, patterns, and text. We then analyze the performance divergence across the queries. Our results suggest that the presence of factual patterns in a prompt is not necessary for the success of CoT. Nonetheless, we empirically show that relying solely on patterns is also insufficient for high-quality results. We posit that text imbues patterns with commonsense knowledge and meaning. Our exhaustive empirical analysis provides qualitative examples of the symbiotic relationship between text and patterns. Such systematic understanding of CoT enables us to devise a concise chain of thought, dubbed CCoT, where text and patterns are pruned to retain only their key roles, while delivering an on-par or slightly higher task solve rate.
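To make the setup concrete, the sketch below contrasts a standard few-shot exemplar with a chain-of-thought exemplar and roughly decomposes the CoT rationale into the symbol, pattern, and text components the abstract refers to. The exemplar wording, the component definitions, and the build_prompt helper are illustrative assumptions, not the paper's actual prompts or decomposition.

# Hypothetical illustration (not the paper's actual prompts): a standard
# few-shot exemplar versus a chain-of-thought exemplar, with the CoT
# rationale loosely split into the three components named in the abstract.

# Standard few-shot exemplar: question followed directly by the answer.
STANDARD_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: 11"
)

# Chain-of-thought exemplar: the same question, but the answer is preceded
# by intermediate reasoning steps that the model is urged to imitate.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11."
)

# A rough, assumed decomposition of the CoT rationale into the components
# the abstract controls for; the exact definitions here are illustrative.
COT_COMPONENTS = {
    "symbols": ["5", "2", "3", "6", "11"],                   # task-specific tokens (here, numbers)
    "patterns": ["<x> + <y> = <z>", "The answer is <z>."],   # recurring structural templates
    "text": "Roger started with ... balls. ... cans of ... balls each is ...",  # natural-language glue
}

def build_prompt(exemplar: str, question: str) -> str:
    """Prepend a single few-shot exemplar to a new question."""
    return f"{exemplar}\n\nQ: {question}\nA:"

if __name__ == "__main__":
    question = "A baker had 7 muffins and baked 3 trays of 4 muffins each. How many muffins are there now?"
    print(build_prompt(COT_EXEMPLAR, question))

Swapping STANDARD_EXEMPLAR for COT_EXEMPLAR (or ablating individual entries of COT_COMPONENTS) is the kind of controlled perturbation the abstract describes, though the paper's concrete procedure may differ.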
