Smoothness in neural network approximators: the good, the bad, the ugly


Today, we have many ways to achieve smoothness in NNs, some implicit and some explicit, including: relying on the inherent smoothness of neural network architectures, relying on the implicit regularization of imperfect optimization methods, stopping training early based on validation metrics, or using latent variable models such as VAEs. For some generative models, such as GANs, smoothness matters both from a learning-landscape perspective and from a gradient-estimation-variance perspective. Smoothness is also crucial for generalization, robustness, and defenses against adversarial examples.
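Of the mechanisms above, early stopping based on validation metrics is perhaps the easiest to make concrete. The following is a minimal, framework-agnostic sketch (all names are hypothetical, and a real training loop would also checkpoint the best weights):

```python
class EarlyStopper:
    """Stop training once the validation loss has not improved
    for `patience` consecutive checks.

    Hypothetical minimal sketch; not tied to any particular framework.
    """

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # how many bad checks to tolerate
        self.min_delta = min_delta    # minimum decrease that counts as progress
        self.best = float("inf")      # best validation loss seen so far
        self.bad_checks = 0           # consecutive checks without improvement

    def step(self, val_loss):
        """Record one validation loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience


# Usage: feed per-epoch validation losses; stop once they plateau.
stopper = EarlyStopper(patience=2)
losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73]
stopped_at = None
for epoch, loss in enumerate(losses):
    if stopper.step(loss):
        stopped_at = epoch
        break
```

The `min_delta` threshold guards against stopping on noise: tiny fluctuations below it do not reset the patience counter.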