Learning to Transduce with Unbounded Memory

NIPS 2015

Recently, strong results have been demonstrated by Deep Recurrent Neural Networks on natural language transduction problems. In this paper we explore the representational power of such models using synthetic grammars designed to exhibit phenomena similar to those found in real transduction problems, such as machine translation. These experiments lead us to propose new memory-based recurrent networks that implement continuously differentiable analogues of the traditional data structures Stacks, Queues, and DeQues. We show that these architectures exhibit superior generalisation performance to Deep RNNs on our transduction experiments and often learn the underlying generating algorithm.
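As a rough illustration of what a continuously differentiable stack can look like, the sketch below implements a stack whose entries carry fractional strengths, so push and pop become real-valued operations and the read is a strength-weighted blend of the topmost entries. It is written in the spirit of the paper's Neural Stack, but the class name, method signature, and NumPy formulation are illustrative rather than the authors' implementation; in the full model the pushed vector and the push and pop strengths would be emitted by an RNN controller at each timestep.

import numpy as np


class NeuralStack:
    """Continuous stack: each stored vector carries a strength in [0, 1]."""

    def __init__(self):
        self.values = []      # stored vectors, index 0 is the bottom of the stack
        self.strengths = []   # scalar strength associated with each stored vector

    def step(self, v, d, u):
        """Pop with strength u, push vector v with strength d, then read.

        Popping removes up to u units of strength from the top down; the read
        returns a blend of the topmost vectors whose strengths sum to at most 1.
        """
        # Pop: each entry loses whatever part of u is not absorbed by the entries above it.
        updated = []
        for i, s in enumerate(self.strengths):
            above = sum(self.strengths[i + 1:])
            updated.append(max(0.0, s - max(0.0, u - above)))

        # Push: the new vector goes on top with strength d.
        self.strengths = updated + [float(d)]
        self.values = self.values + [np.asarray(v, dtype=float)]

        # Read: strength-weighted blend of the top of the stack.
        r = np.zeros_like(self.values[-1])
        for i, (val, s) in enumerate(zip(self.values, self.strengths)):
            above = sum(self.strengths[i + 1:])
            r += min(s, max(0.0, 1.0 - above)) * val
        return r


# Example: push two vectors, popping a little of the first before the second.
stack = NeuralStack()
print(stack.step([1.0, 0.0], d=0.8, u=0.0))   # roughly 0.8 * [1, 0]
print(stack.step([0.0, 1.0], d=0.5, u=0.1))   # blend of [0, 1] and what remains of [1, 0]

Because every operation above is built from max, min, and weighted sums, the stack state is a (piecewise) differentiable function of the controller's outputs, which is what allows it to be trained end to end alongside the recurrent network.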

