Dynamic Generation of Interpretable Inference Rules in a Neuro-Symbolic Expert System

09/16/2022
by   Nathaniel Weir, et al.

We present an approach for systematic reasoning that produces human-interpretable proof trees grounded in a factbase. Our solution resembles the style of a classic Prolog-based inference engine, but replaces handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval. This novel reasoning engine, NELLIE, dynamically instantiates interpretable inference rules that capture and score entailment (de)compositions over natural language statements. NELLIE achieves competitive performance on scientific QA datasets that require structured explanations over multiple facts.
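To make the Prolog-style control flow concrete, here is a minimal sketch of backward chaining over natural-language statements. Everything here is illustrative: the `decompose` function below is a hard-coded stand-in for the neural component, whereas NELLIE itself would propose and score entailment (de)compositions with guided generation and dense retrieval rather than a fixed rule table.

```python
# Hypothetical toy factbase of natural-language statements.
FACTBASE = {
    "an animal requires warmth for survival",
    "a bear is a kind of animal",
}

def decompose(goal):
    """Return candidate premise pairs whose conjunction entails `goal`.
    Hard-coded stand-in for NELLIE's neural rule generator + retriever."""
    rules = {
        "a bear requires warmth for survival": [
            ("a bear is a kind of animal",
             "an animal requires warmth for survival"),
        ],
    }
    return rules.get(goal, [])

def prove(goal, depth=3):
    """Backward-chain: a goal is proven if it is a known fact, or if some
    decomposition yields subgoals that are each recursively provable.
    Returns a nested proof tree, or None on failure."""
    if goal in FACTBASE:
        return {"goal": goal, "support": "fact"}
    if depth == 0:
        return None
    for premises in decompose(goal):
        subtrees = [prove(p, depth - 1) for p in premises]
        if all(subtrees):
            return {"goal": goal, "support": subtrees}
    return None

tree = prove("a bear requires warmth for survival")
```

The returned nested dictionary is the human-interpretable proof tree: each node records a statement and the sub-statements (or grounded fact) that support it.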
