Learning Neural Causal Models with Active Interventions


\begin{abstract}
Learning a directed acyclic graph (DAG) from data is a challenging inference problem and an important task in many disciplines. Owing to the appealing scaling properties of neural networks, there has been a recent surge of interest in differentiable, neural-network-based methods for learning causal structure from data. So far, these methods have focused on static datasets of observational or interventional data. In this work, we introduce an intervention targeting mechanism that enables rapid identification of the underlying causal structure of the data-generating process. Our method significantly reduces the number of interactions required compared with random exploration, and it is applicable to both discrete and continuous optimization formulations of learning the underlying DAG from data. We examine the proposed method across a wide range of settings and demonstrate superior performance on multiple benchmarks, from simulated to real-world data. \end{abstract}