Generalized Doubly-Reparameterized Gradient Estimators

Abstract

Accurate and low-variance gradient estimation is an important prerequisite for training generative probabilistic models. For example, variational autoencoders (VAEs) only became competitive with the introduction of the reparameterization trick. Here, we generalize the recently proposed doubly-reparameterized gradients (DReGs) estimator for Monte Carlo objectives in two ways and show that the resulting estimators can be used to train conditional and hierarchical VAEs more effectively on image modelling tasks. First, we generalize DReGs to arbitrary score function gradients instead of just those of the sampling distribution; second, we extend the estimator to hierarchical models with several stochastic layers. In the latter case, the distribution parameters of subsequent layers depend on the stochastic variables of previous layers, such that seemingly pathwise gradients can give rise to additional score function gradients. We show that our estimators can still reduce gradient variance in these cases by systematically applying double reparameterization to some of these score function gradients.
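For background, a minimal sketch of the original DReG estimator that this work generalizes (Tucker et al., 2019), stated for a K-sample importance-weighted bound; the notation below (proposal q_\phi, model p_\theta, reparameterized samples z_k = z_k(\epsilon_k, \phi)) is introduced here only for illustration and is not part of the abstract itself:

\[
  \mathcal{L}_K = \mathbb{E}_{z_{1:K} \sim q_\phi(\cdot \mid x)}\!\left[ \log \frac{1}{K} \sum_{k=1}^{K} w_k \right],
  \qquad w_k = \frac{p_\theta(x, z_k)}{q_\phi(z_k \mid x)},
\]
\[
  \frac{\partial \mathcal{L}_K}{\partial \phi}
  = \mathbb{E}_{\epsilon_{1:K}}\!\left[ \sum_{k=1}^{K} \left( \frac{w_k}{\sum_{j} w_j} \right)^{\!2}
    \frac{\partial \log w_k}{\partial z_k}\, \frac{\partial z_k}{\partial \phi} \right].
\]

The second expression contains only pathwise terms: the score function term of the naive gradient has been removed by applying reparameterization a second time. The generalizations described in the abstract carry this variance reduction over to arbitrary score function gradients and to hierarchical models with several stochastic layers.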

Publications