Learning First-Order Symbolic Planning Representations from Plain Graphs

09/12/2019
by Blai Bonet, et al.

One of the main obstacles to developing flexible AI systems is the split between data-based learners and model-based solvers. Solvers such as classical planners are very flexible and can deal with a variety of problem instances and goals, but they require first-order symbolic models. Data-based learners, on the other hand, are robust but do not produce such representations. In this work we address this split by showing how the first-order symbolic representations used by planners can be learned from non-symbolic representations alone, given by a number of observed system trajectories organized as graphs. The observations can be arbitrary, including raw images. All that is required is that two observations are different iff they arise from different states. The representation learning problem is formulated as the problem of inferring the simplest planning instances over a common first-order domain that can generate the structures of the observed graphs. A slightly richer version of the problem is also considered in which actions are observed as well and the graphs are labeled. The problem is expressed and solved via a SAT formulation that is shown to produce first-order representations for domains such as Gripper, Blocks, and Hanoi. The work suggests that the target symbolic representations for planning encode the structure of the observed state space, not the observations themselves, as assumed in deep learning approaches.
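To make the input to the learner concrete, the following is a minimal sketch (not the authors' code) of how observed trajectories could be organized as the graphs described above, using networkx. Observations are treated as arbitrary byte blobs (e.g., raw images), and two observations map to the same node iff they are identical, mirroring the assumption that observations differ iff they come from different states. Edge labels carry the observed actions in the labeled variant of the problem; all names here are illustrative.

```python
# Sketch: build the observed state graph from plain trajectories.
import hashlib
import networkx as nx

def node_id(observation: bytes) -> str:
    """Identify a state by the hash of its (arbitrary) observation."""
    return hashlib.sha256(observation).hexdigest()

def build_state_graph(trajectories):
    """trajectories: iterable of [(obs, action_label, next_obs), ...] triples.
    action_label may be None in the unlabeled setting."""
    g = nx.DiGraph()
    for trajectory in trajectories:
        for obs, action, next_obs in trajectory:
            u, v = node_id(obs), node_id(next_obs)
            g.add_edge(u, v, label=action)  # labeled edge when actions are observed
    return g

# Toy observations standing in for raw images:
traj = [(b"img-state-0", "pick", b"img-state-1"),
        (b"img-state-1", "drop", b"img-state-2")]
graph = build_state_graph([traj])
print(graph.number_of_nodes(), graph.number_of_edges())  # 3 2
```

Graphs of this form, one per observed instance, would then be the input to the SAT formulation, which searches for the simplest planning instances over a common first-order domain whose state spaces reproduce the structure of these graphs.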
