Group Equivariant Subsampling


Subsampling is used in convolutional neural networks (CNNs), in the form of pooling or strided convolutions, to reduce the spatial size of feature maps and to allow receptive fields to grow with depth. However, it is known that such subsampling operations are not translation equivariant, unlike convolutions, which \emph{are} translation equivariant. In this work, we first introduce translation equivariant subsampling/upsampling layers that can be used to construct exact translation equivariant CNNs. We then generalise these layers beyond translations to general groups, thus proposing group equivariant subsampling/upsampling. We use these layers to construct group equivariant autoencoders (G-AEs) that allow us to learn low-dimensional equivariant representations. We verify through experiments on images that the representations are indeed equivariant to input translations and rotations, and thus generalise well to unseen positions and orientations. We further use G-AEs in models that learn object-centric representations on multi-object datasets, and show improved data efficiency and decomposition compared to non-equivariant baselines.
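The failure of equivariance mentioned above can be seen directly in one dimension. The following is a minimal NumPy sketch (not taken from the paper; all function names are illustrative) showing that stride-2 subsampling does not commute with a one-pixel shift, whereas a circular convolution does:

```python
import numpy as np

def subsample(x, stride=2):
    # Keep every `stride`-th element, analogous to strided convolution/pooling.
    return x[::stride]

def shift(x, k=1):
    # Circular shift of the signal by k positions.
    return np.roll(x, k)

def circ_conv(x, w):
    # Circular (periodic) convolution of signal x with kernel w.
    n = len(x)
    return np.array([sum(x[(i - j) % n] * w[j] for j in range(len(w)))
                     for i in range(n)])

x = np.arange(8.0)              # toy 1D signal [0, 1, ..., 7]

# Subsampling: shift-then-subsample differs from subsample-then-shift.
a = subsample(shift(x))         # [7. 1. 3. 5.]
b = shift(subsample(x))         # [6. 0. 2. 4.]
print(np.array_equal(a, b))     # False: subsampling breaks equivariance

# Convolution: the two orders agree, so it is translation equivariant.
w = np.array([1.0, 2.0, 1.0])
c = circ_conv(shift(x), w)
d = shift(circ_conv(x, w))
print(np.allclose(c, d))        # True
```

The translation equivariant subsampling layers proposed in the paper are designed to restore the commuting property that the first comparison shows plain strided subsampling lacks.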