
Learning 3D Particle-based Simulators from RGB-D Videos

Abstract

Realistic simulation is critical for applications ranging from robotics to animation. Traditional analytic simulators sometimes struggle to produce sufficiently realistic simulations, which can lead to problems including the well-known "sim-to-real" gap. Learned simulators have emerged as an alternative for better capturing real-world physical dynamics, but they require access to privileged ground-truth physics information such as precise object geometry or particle tracks. Here we propose a method for learning simulators directly from observations. Visual Particle Dynamics (VPD) jointly learns a latent particle-based representation of 3D scenes, a neural simulator of the dynamics of that representation, and a renderer that can produce images of the scene from arbitrary views. VPD learns end-to-end from posed RGB-D videos and does not require access to privileged information. Unlike existing 2D video prediction models, we show that VPD's 3D inductive biases enable it to generalize compositionally and make long-term predictions. These results pave the way for downstream applications ranging from video editing to robotic planning.
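To make the three-module structure described in the abstract concrete, below is a minimal sketch (not the authors' implementation): an encoder that unprojects a posed RGB-D frame into 3D particles, a dynamics step over those particles, and a renderer that produces an image from an arbitrary camera. All function names, shapes, the raw-RGB "latent" features, and the naive point-splatting renderer are illustrative assumptions; in VPD each of these components is a learned neural network.

```python
# Minimal sketch of the encoder / dynamics / renderer decomposition described
# in the abstract. Names and implementations here are hypothetical placeholders,
# not the paper's modules.
import jax.numpy as jnp


def encode(rgbd, intrinsics, cam_to_world):
    """Unproject each pixel of an HxWx4 RGB-D frame to a world-space particle."""
    h, w = rgbd.shape[:2]
    ys, xs = jnp.meshgrid(jnp.arange(h), jnp.arange(w), indexing="ij")
    pix = jnp.stack([xs, ys, jnp.ones_like(xs)], axis=-1).reshape(-1, 3).astype(jnp.float32)
    rays = pix @ jnp.linalg.inv(intrinsics).T             # camera-frame ray per pixel
    pts_cam = rays * rgbd[..., 3].reshape(-1, 1)          # scale rays by depth
    pts_world = pts_cam @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]
    feats = rgbd[..., :3].reshape(-1, 3)                  # placeholder "latent": raw RGB
    return pts_world, feats


def dynamics(positions, feats):
    """One rollout step; in the paper this is a learned particle-based simulator."""
    return positions, feats                               # identity placeholder


def render(positions, feats, intrinsics, world_to_cam, image_shape):
    """Splat particles into a target view; the paper instead learns the renderer."""
    pts_cam = positions @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    proj = pts_cam @ intrinsics.T
    uv = (proj[:, :2] / proj[:, 2:3]).astype(jnp.int32)
    valid = (
        (pts_cam[:, 2] > 0)
        & (uv[:, 0] >= 0) & (uv[:, 0] < image_shape[1])
        & (uv[:, 1] >= 0) & (uv[:, 1] < image_shape[0])
    )
    uv, feats = uv[valid], feats[valid]
    img = jnp.zeros((*image_shape, 3))
    return img.at[uv[:, 1], uv[:, 0]].set(feats)
```

Under these assumptions, a rollout would encode one or more posed RGB-D frames once, apply the dynamics step repeatedly to advance the particle state, and render the result from any query camera, which is what allows view-consistent long-horizon prediction without privileged physics information.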

Authors

Will Whitney, Tatiana López, Tobias Pfaff, Yulia Rubanova, Thomas Kipf, Kimberly Stachenfeld, Kelsey Allen

Venue

ICLR 2024