Reasoning-Modulated Representations

Abstract

Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g. that observations must obey certain laws of physics) that any "tabula rasa" neural network would need to re-learn from scratch, penalising data efficiency. We incorporate this information into a pre-trained reasoning module, and investigate its role in shaping the discovered representations. We hypothesise that, by making such a module a bottleneck of our architecture, the learning process will focus on the relevant abstract features of the data, leading to more robust representations. We empirically validate this hypothesis in diverse self-supervised learning settings from pixel observations, paving the way for a new class of data-efficient representation learning methods.
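
To make the "reasoning module as a bottleneck" idea concrete, here is a minimal sketch in PyTorch. It is an illustration under assumed names and sizes (ReasoningBottleneck, obs_dim, latent_dim, the stand-in pre-trained reasoner), not the exact architecture from the paper: an encoder maps raw observations into the input space of a frozen, pre-trained reasoning module, and a decoder maps the module's output back to the prediction space, so gradients only shape the representations around the fixed reasoning step.

```python
import torch
import torch.nn as nn

class ReasoningBottleneck(nn.Module):
    """Encoder -> frozen pre-trained reasoning module -> decoder.

    Illustrative sketch only; module names and sizes are placeholders.
    """

    def __init__(self, reasoning_module: nn.Module, obs_dim: int,
                 latent_dim: int, out_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        # The pre-trained reasoning module is kept frozen, so it acts as a
        # bottleneck: only the encoder and decoder are trained around it.
        self.reasoner = reasoning_module
        for p in self.reasoner.parameters():
            p.requires_grad = False
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, out_dim)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        z = self.encoder(obs)   # map raw observations to abstract latents
        z = self.reasoner(z)    # apply the fixed, pre-trained reasoning step
        return self.decoder(z)  # map back to the prediction space


# Toy usage: a stand-in MLP plays the role of a reasoning module that was
# (hypothetically) pre-trained on abstract, ground-truth system dynamics.
pretrained_reasoner = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 32))
model = ReasoningBottleneck(pretrained_reasoner, obs_dim=784, latent_dim=32, out_dim=784)
pred = model(torch.randn(8, 784))
print(pred.shape)  # torch.Size([8, 784])
```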

Publications