Learning explanatory rules from noisy data

Suppose you are playing football. The ball arrives at your feet, and you decide to pass it to the unmarked striker. What seems like one simple action requires two different kinds of thought. 

First, you recognise that there is a football at your feet. This recognition requires intuitive perceptual thinking - you cannot easily articulate how you come to know that there is a ball at your feet; you just see that it is there. Second, you decide to pass the ball to a particular striker. This decision requires conceptual thinking. Your decision is tied to a justification - the reason you passed the ball to the striker is that she was unmarked.

The distinction is interesting to us because these two types of thinking correspond to two different approaches to machine learning: deep learning and symbolic program synthesis. Deep learning concentrates on intuitive perceptual thinking whereas symbolic program synthesis focuses on conceptual, rule-based thinking. Each system has different merits - deep learning systems are robust to noisy data but are difficult to interpret and require large amounts of data to train, whereas symbolic systems are much easier to interpret and require less training data but struggle with noisy data. While human cognition seamlessly combines these two distinct ways of thinking, it is much less clear whether or how it is possible to replicate this in a single AI system.

Our new paper, recently published in JAIR, demonstrates that it is possible for a single system to combine intuitive perceptual thinking with conceptual, interpretable reasoning. The system we describe, ∂ILP, is robust to noise, data-efficient, and produces interpretable rules.


We demonstrate how ∂ILP works with an induction task. It is given a pair of images representing numbers, and has to output a label (0 or 1) indicating whether the number in the left image is less than the number in the right image. Solving this problem involves both kinds of thinking: you need intuitive perceptual thinking to recognise each image as a representation of a particular digit, and you need conceptual thinking to understand the less-than relation in its full generality.
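A minimal sketch of the task's ground truth may help make this concrete. The names below are hypothetical, and we work with the underlying digits directly, since in the actual task the perceptual half (mapping an image to its digit) is handled by a neural network:

```python
def make_label(left_digit: int, right_digit: int) -> int:
    """Ground-truth labelling rule the system must recover:
    1 if the left digit is less than the right digit, else 0."""
    return 1 if left_digit < right_digit else 0

# Each training example pairs two digit images; here we show only
# the digits they depict, together with the target label.
examples = [(3, 7, make_label(3, 7)),  # 3 < 7, so the label is 1
            (7, 3, make_label(7, 3))]  # 7 is not < 3, so the label is 0
```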

An example induction task.

If you give a standard deep learning model (such as a convolutional neural network combined with an MLP) sufficient training data, it learns to solve this task effectively. Once it has been trained, you can give it a new pair of images it has never seen before, and it will classify them correctly. However, it will only generalise correctly if it has been trained on multiple examples of every pair of digits. The model is good at visual generalisation: generalising to new images, provided every pair of digits in the test set has also appeared during training (see the green box below). But it is not capable of symbolic generalisation: generalising to a pair of digits it has not seen before (see the blue box below). Researchers like Gary Marcus and Joel Grus have pointed this out in recent, thought-provoking articles.
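The distinction between the two kinds of generalisation comes down to how the data is split. A sketch of a symbolic-generalisation split, holding out 60% of the digit pairs entirely so that the model trains on only 40% of them (matching the experiment described later in this post):

```python
import random

digits = range(10)
all_pairs = [(a, b) for a in digits for b in digits]  # 100 digit pairs

random.seed(0)
# Visual generalisation: every digit pair appears in training, only the
# images are new at test time. Symbolic generalisation: whole digit pairs
# are held out of training, so the test pairs themselves are unseen.
held_out = set(random.sample(all_pairs, int(0.6 * len(all_pairs))))
train_pairs = [p for p in all_pairs if p not in held_out]
test_pairs = [p for p in all_pairs if p in held_out]
```

A standard network does well when tested on `train_pairs`-style data with fresh images, but fails on `test_pairs`, where the digit combination itself is new.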


∂ILP differs from standard neural networks in that it is able to generalise symbolically, and it differs from standard symbolic programs in that it is able to generalise visually. From examples, it learns explicit programs that are readable, interpretable, and verifiable. ∂ILP is given a partial set of examples (the desired results) and produces a program that satisfies them, searching through the space of programs using gradient descent. If the outputs of the program conflict with the desired outputs from the reference data, the system revises the program to better match the data.
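The core idea of searching program space by gradient descent can be illustrated with a deliberately tiny toy, which is not the actual ∂ILP architecture (∂ILP assigns continuous weights to candidate clauses drawn from a program template and performs differentiable forward chaining). Here we simply place a softmax over three candidate rules and let gradient descent push the weight onto the one that matches the data:

```python
import math

# Three candidate rules for predict(x, y); only the first fits the data.
candidates = [
    lambda x, y: float(x < y),
    lambda x, y: float(x > y),
    lambda x, y: float(x == y),
]
data = [(a, b, float(a < b)) for a in range(5) for b in range(5)]
w = [0.0, 0.0, 0.0]  # one learnable weight per candidate rule

def softmax(ws):
    m = max(ws)
    e = [math.exp(v - m) for v in ws]
    s = sum(e)
    return [v / s for v in e]

def loss(ws):
    # Cross-entropy between the weighted mixture of rule outputs
    # and the desired outputs from the reference data.
    p = softmax(ws)
    total = 0.0
    for x, y, target in data:
        pred = sum(pi * c(x, y) for pi, c in zip(p, candidates))
        pred = min(max(pred, 1e-6), 1 - 1e-6)
        total -= target * math.log(pred) + (1 - target) * math.log(1 - pred)
    return total / len(data)

# Gradient descent, using numerical gradients for brevity.
for _ in range(200):
    grads = []
    for i in range(len(w)):
        bumped = list(w)
        bumped[i] += 1e-4
        grads.append((loss(bumped) - loss(w)) / 1e-4)
    w = [wi - 0.5 * g for wi, g in zip(w, grads)]

best = max(range(len(w)), key=lambda i: w[i])  # the winning rule
```

When the mixture's outputs disagree with the data, the loss is high, and the update shifts weight away from the offending rules; in the limit the softmax concentrates on the correct clause.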

This figure demonstrates the ∂ILP training loop.

Our system, ∂ILP, is able to generalise symbolically. Once it has seen enough examples satisfying x < y, y < z, and x < z, it will consider the possibility that the < relation is transitive. Once it has discovered this general rule, it can apply it to a new pair of numbers it has never seen before.
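In logic-program form, such a learned transitivity clause would read something like `lt(X, Z) :- lt(X, Y), lt(Y, Z)`. A sketch of how applying it by forward chaining derives facts about pairs never seen directly (the facts and helper below are illustrative, not ∂ILP's internals):

```python
# Observed x < y facts; (1, 4) is never observed directly.
facts = {(1, 2), (2, 3), (3, 4)}

def forward_chain(facts):
    """Repeatedly apply the transitive rule until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = {(x, z) for (x, y1) in derived for (y2, z) in derived
               if y1 == y2}
        if not new <= derived:
            derived |= new
            changed = True
    return derived

closure = forward_chain(facts)  # now also contains (1, 3), (2, 4), (1, 4)
```

The payoff is exactly the symbolic generalisation described above: a rule discovered from a handful of examples extends, with no further training, to pairs the system has never encountered.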

Our less-than experiment is summarised above: the standard deep neural network (the blue curve) is unable to generalise correctly to unseen pairs of digits. By contrast, ∂ILP (the green curve) still achieves a low test error having seen only 40% of the pairs of digits; this shows it is capable of symbolic generalisation.

We believe that our system goes some way towards answering the question of whether achieving symbolic generalisation in deep neural networks is possible. In future work, we plan to integrate ∂ILP-like systems into reinforcement learning agents and larger deep learning modules. In doing so, we hope to give our systems the ability to reason as well as to react.

Read the paper here.