Publications

104 publications

Nature 2016

Mastering the game of Go with Deep Neural Networks & Tree Search

Authors: D Silver, A Huang, C J Maddison, A Guez, L Sifre, G van den Driessche, J Schrittwieser, I Antonoglou, V Panneershelvam, M Lanctot, S Dieleman, D Grewe, J Nham, N Kalchbrenner, I Sutskever, T Lillicrap, T Graepel, M Leach, K Kavukcuoglu, D Hassabis

A new approach to computer Go that combines Monte-Carlo tree search with deep neural networks trained by supervised learning from human expert games and by reinforcement learning from games of self-play. This is the first time a computer program has defeated a human professional player in the full-sized game of Go.

NIPS 2016

Learning to Communicate with Deep Multi-Agent Reinforcement Learning

Authors: J Foerster, Y M Assael, N de Freitas, S Whiteson

We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains.
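
A minimal sketch of the DIAL idea, assuming PyTorch, an additive-noise channel, and toy dimensions; the class and variable names are illustrative rather than taken from the paper. The key point is that the message stays differentiable, so the receiving agent's error derivatives flow back through the channel to the sender.

```python
import torch
import torch.nn as nn

class DIALAgent(nn.Module):
    """Toy agent with a Q-value head and a real-valued message head."""
    def __init__(self, obs_dim, n_actions, msg_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim + msg_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)
        self.msg_head = nn.Linear(hidden, msg_dim)

    def forward(self, obs, incoming_msg):
        h = self.body(torch.cat([obs, incoming_msg], dim=-1))
        return self.q_head(h), self.msg_head(h)

def noisy_channel(msg, sigma=0.2):
    # Additive noise keeps the channel differentiable during centralised
    # training; at (decentralised) execution the message would be discretised.
    return msg + sigma * torch.randn_like(msg)

# Two agents exchange one message per step; gradients cross the channel.
a1, a2 = DIALAgent(4, 3, 2), DIALAgent(4, 3, 2)
obs1, obs2 = torch.randn(1, 4), torch.randn(1, 4)
q1, m1 = a1(obs1, torch.zeros(1, 2))
q2, m2 = a2(obs2, noisy_channel(m1))   # receiver conditions on sender's message
q2.max().backward()                    # stand-in for a TD loss; reaches a1's weights
```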

NIPS 2016

Unsupervised Learning of 3D Structure from Images

Authors: D J Rezende, S M A Eslami, S Mohamed, P Battaglia, M Jaderberg, N Heess

A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.

NIPS 2016

Sequential Neural Models with Stochastic Layers

Authors: M Fraccaro, S K Sønderby, U Paquet, Ole Winther

How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks, which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over the uncertainty in a latent path, like a state space model, we improve the state-of-the-art results on the Blizzard and TIMIT speech modeling datasets by a large margin, while achieving performance comparable to competing methods on polyphonic music modeling.

NIPS 2016

Conditional Image Generation with PixelCNN Decoders

Authors: A van den Oord, N Kalchbrenner, O Vinyals, L Espeholt, A Graves, K Kavukcuoglu

This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.
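
A rough sketch of the gated, conditioned convolutional unit at the heart of the model, y = tanh(W_f ∗ x + V_f h) ⊙ σ(W_g ∗ x + V_g h), assuming PyTorch; the causal masking that makes PixelCNN autoregressive is omitted for brevity, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class GatedCondConv(nn.Module):
    """Gated activation with global conditioning: the conditioning vector h
    (e.g. a class embedding) shifts both the tanh and sigmoid pathways.
    The causal masking of the convolution is omitted here."""
    def __init__(self, channels, cond_dim, ksize=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, 2 * channels, ksize, padding=ksize // 2)
        self.cond = nn.Linear(cond_dim, 2 * channels, bias=False)

    def forward(self, x, h):
        f, g = (self.conv(x) + self.cond(h)[:, :, None, None]).chunk(2, dim=1)
        return torch.tanh(f) * torch.sigmoid(g)

layer = GatedCondConv(channels=16, cond_dim=10)
x, h = torch.randn(1, 16, 8, 8), torch.randn(1, 10)
out = layer(x, h)   # (1, 16, 8, 8), modulated by the conditioning vector
```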

NIPS 2016

Strategic Attentive Writer for Learning Macro-Actions

Authors: A Vezhnevets, V Mnih, J Agapiou, S Osindero, A Graves, O Vinyals, K Kavukcuoglu

We present a novel deep recurrent neural network architecture that learns to build implicit plans in an end-to-end manner, purely by interacting with an environment in a reinforcement learning setting. The network builds an internal plan, which is continuously updated upon observation of the next input from the environment. It can also partition this internal representation into contiguous sub-sequences by learning how long the plan can be committed to, i.e. followed without re-planning. Combining these properties, the proposed model, dubbed STRategic Attentive Writer (STRAW), can learn high-level, temporally abstracted macro-actions of varying lengths that are learnt solely from data, without any prior information. These macro-actions enable both structured exploration and economic computation. We experimentally demonstrate that STRAW delivers strong improvements on several ATARI games by employing temporally extended planning strategies (e.g. Ms. Pacman and Frostbite). It is at the same time a general algorithm that can be applied to any sequence data. To that end, we also show that when trained on a text prediction task, STRAW naturally predicts frequent n-grams (instead of macro-actions), demonstrating the generality of the approach.

NIPS 2016

Learning to Learn by Gradient Descent by Gradient Descent

Authors: M Andrychowicz, M Denil, S Gomez Colmenarejo, M W Hoffman, D Pfau, T Schaul, N de Freitas

The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
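
A toy sketch of the coordinatewise learned optimizer, assuming PyTorch and omitting the paper's gradient preprocessing and unrolled meta-training loop; a single small LSTM is shared across all parameter coordinates and proposes each coordinate's update from its gradient.

```python
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """A single small LSTM, shared across coordinates, maps each
    parameter's gradient to a proposed update for that parameter."""
    def __init__(self, hidden=20):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, grads, state=None):
        g = grads.view(1, -1, 1)             # (seq=1, batch=coords, feature=1)
        h, state = self.lstm(g, state)
        return self.out(h).view(-1), state   # per-coordinate updates

opt_net = LSTMOptimizer()
theta = torch.randn(5, requires_grad=True)          # toy optimizee parameters
loss = (theta ** 2).sum()                           # toy convex objective
grad, = torch.autograd.grad(loss, theta)
update, state = opt_net(grad.detach())              # learned update rule
theta = (theta + update).detach().requires_grad_()  # one optimisation step
```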

NIPS 2016

Matching Networks for One Shot Learning

Authors: O Vinyals, C Blundell, T Lillicrap, K Kavukcuoglu, D Wierstra

Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.
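
A minimal sketch of the attention readout at the core of the model: the query's label distribution is an attention-weighted sum of support-set labels, here with plain cosine-similarity attention over embeddings that are assumed to be precomputed (the paper additionally learns the embedding functions, including full-context versions).

```python
import torch
import torch.nn.functional as F

def matching_net_predict(support_emb, support_labels, query_emb, n_classes):
    """p(y|x) = sum_i a(x, x_i) * one_hot(y_i), where a is a softmax over
    cosine similarities between the embedded query and support points."""
    sims = F.cosine_similarity(query_emb[None, :], support_emb, dim=1)
    attn = F.softmax(sims, dim=0)                    # attention over support set
    onehot = F.one_hot(support_labels, n_classes).float()
    return attn @ onehot                             # class distribution

support = torch.randn(5, 32)            # 5 embedded support examples
labels = torch.tensor([0, 1, 2, 3, 4])  # one example per class (one-shot)
query = torch.randn(32)
probs = matching_net_predict(support, labels, query, n_classes=5)
```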

NIPS 2016

Memory-Efficient Backpropagation through Time

Authors: A Gruslys, R Munos, I Danihelka, M Lanctot, A Graves

We propose a novel approach to reduce memory consumption of the backpropagation through time (BPTT) algorithm when training recurrent neural networks (RNNs). Our approach uses dynamic programming to balance a trade-off between caching of intermediate results and recomputation. The algorithm is capable of tightly fitting within almost any user-set memory budget while finding an optimal execution policy that minimizes the computational cost. Computational devices have limited memory capacity, and maximizing computational performance given a fixed memory budget is a practical use case. We provide asymptotic computational upper bounds for various regimes. The algorithm is particularly effective for long sequences. For sequences of length 1000, our algorithm saves 95% of memory usage while using only one third more time per iteration than standard BPTT.
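
The paper finds an optimal caching policy by dynamic programming; the fixed √T-segment schedule below is the classic special case of the same memory-for-recompute trade, sketched with PyTorch's checkpointing utility and an illustrative RNN cell.

```python
import math
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

cell = nn.RNNCell(8, 16)                 # illustrative recurrent core
T = 100
k = int(math.sqrt(T))                    # segment length ~ sqrt(T)
xs = torch.randn(T, 1, 8)
h = torch.zeros(1, 16)

def run_segment(h, *seg):
    for x in seg:
        h = cell(x, h)
    return h

for t in range(0, T, k):
    # Only the boundary hidden states are kept in memory; the activations
    # inside each segment are recomputed during the backward pass.
    h = checkpoint(run_segment, h, *xs[t:t + k], use_reentrant=False)

h.sum().backward()                       # triggers per-segment recomputation
```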

NIPS 2016

Safe and Efficient Off-Policy Reinforcement Learning

Authors: R Munos, T Stepleton, A Harutyunyan, M G Bellemare

In this work, we take a fresh look at some old and new algorithms for off-policy, return-based reinforcement learning. Expressing these in a common form, we derive a novel algorithm, Retrace(λ), with three desired properties: (1) low variance; (2) safety, as it safely uses samples collected from any behaviour policy, whatever its degree of "off-policyness"; and (3) efficiency, as it makes the best use of samples collected from near on-policy behaviour policies. We analyse the contractive nature of the related operator under both off-policy policy evaluation and control settings and derive online sample-based algorithms. To our knowledge, this is the first return-based off-policy control algorithm converging a.s. to Q∗ without the GLIE assumption (Greedy in the Limit with Infinite Exploration). As a corollary, we prove the convergence of Watkins' Q(λ), which was still an open problem. We illustrate the benefits of Retrace(λ) on a standard suite of Atari 2600 games.
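
A sketch of the Retrace(λ) correction for a single trajectory, transcribing ΔQ(x_0, a_0) = Σ_t γ^t (c_1 ⋯ c_t) δ_t with truncated importance weights c_s = λ min(1, π(a_s|x_s)/μ(a_s|x_s)); the array layout and early exit are illustrative choices, not the paper's code.

```python
import numpy as np

def retrace_correction(q, pi, mu, rewards, actions, gamma=0.99, lam=1.0):
    """Correction to Q(x_0, a_0) along one trajectory.
    q, pi, mu: (T+1, A) arrays of action values and target/behaviour
    policy probabilities; rewards, actions: length-T sequences."""
    T = len(rewards)
    total, trace = 0.0, 1.0
    for t in range(T):
        if t > 0:
            # truncated importance weight c_t = lam * min(1, pi/mu)
            trace *= lam * min(1.0, pi[t, actions[t]] / mu[t, actions[t]])
            if trace == 0.0:
                break
        exp_q_next = float(np.dot(pi[t + 1], q[t + 1]))  # E_pi Q(x_{t+1}, .)
        delta = rewards[t] + gamma * exp_q_next - q[t, actions[t]]
        total += (gamma ** t) * trace * delta
    return total
```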

NIPS 2016

Unifying Count-Based Exploration and Intrinsic Motivation

Authors: M G Bellemare, S Srinivasan, G Ostrovski, T Schaul, D Saxton, R Munos

We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across observations. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use sequential density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary sequential density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into intrinsic rewards and obtain significantly improved exploration in a number of hard games, including the infamously difficult Montezuma's Revenge.
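
A sketch of the pseudo-count recovered from a sequential density model, using the paper's closed form: if ρ is the model's probability of x before an update on x and ρ' the recoding probability just after, the pseudo-count solves ρ = N̂/n̂ and ρ' = (N̂+1)/(n̂+1). The bonus form β/sqrt(N̂ + 0.01) follows the Atari experiments; the numeric inputs below are made up.

```python
import math

def pseudo_count(rho, rho_prime):
    # N-hat = rho * (1 - rho') / (rho' - rho); requires rho' > rho,
    # i.e. the density model assigns x more probability after seeing it.
    return rho * (1.0 - rho_prime) / (rho_prime - rho)

def exploration_bonus(n_hat, beta=0.05):
    # Count-based intrinsic reward beta / sqrt(N-hat + 0.01).
    return beta / math.sqrt(n_hat + 0.01)

n_hat = pseudo_count(0.010, 0.012)   # density rose after observing x
bonus = exploration_bonus(n_hat)     # added to the environment reward
```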

NIPS 2016

Towards Conceptual Compression

Authors: K Gregor, F Besse, D J Rezende, I Danihelka, D Wierstra

We introduce a simple recurrent variational auto-encoder architecture that significantly improves image modeling. The system represents the state-of-the-art in latent variable models for both the ImageNet and Omniglot datasets. We show that it naturally separates global conceptual information from lower level details, thus addressing one of the fundamentally desired properties of unsupervised learning. Furthermore, the possibility of restricting ourselves to storing only global information about an image allows us to achieve high quality 'conceptual compression'.

NIPS 2016

Deep Exploration via Bootstrapped DQN

Authors: I Osband, C Blundell, A Pritzel, B Van Roy

Efficient exploration in complex environments remains a major challenge for reinforcement learning. We propose bootstrapped DQN, a simple algorithm that explores in a computationally and statistically efficient manner through use of randomized value functions. Unlike dithering strategies such as epsilon-greedy exploration, bootstrapped DQN carries out temporally-extended (or deep) exploration; this can lead to exponentially faster learning. We demonstrate these benefits in complex stochastic MDPs and in the large-scale Arcade Learning Environment. Bootstrapped DQN substantially improves learning times and performance across most Atari games.
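
A minimal sketch of the architecture: K Q-value heads on a shared torso, one head sampled per episode and followed greedily; the sizes, mask probability, and names are illustrative.

```python
import random
import torch
import torch.nn as nn

class BootstrappedDQN(nn.Module):
    """Shared torso with K bootstrapped Q-heads."""
    def __init__(self, obs_dim, n_actions, k_heads=10, hidden=64):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n_actions) for _ in range(k_heads)])

    def forward(self, obs, head):
        return self.heads[head](self.torso(obs))

net = BootstrappedDQN(obs_dim=4, n_actions=2)
head = random.randrange(len(net.heads))    # sample one head per episode...
obs = torch.randn(1, 4)
action = net(obs, head).argmax(dim=-1)     # ...and follow it greedily
# Each replay transition also stores a bootstrap mask saying which heads train on it:
mask = torch.bernoulli(torch.full((len(net.heads),), 0.5))
```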

NIPS 2016

Learning values across many orders of magnitude

Authors: H van Hasselt, A Guez, M Hessel, V Mnih, D Silver

Most learning algorithms are not invariant to the scale of the function that is being approximated. We propose to adaptively normalize the targets used in learning. This is useful in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were all clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using the adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance.
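
A small sketch of the adaptive normalization scheme (Pop-Art in the paper) for scalar targets, assuming exponential moving moments: when the running mean and scale of the targets change, the final linear layer is rescaled so that the unnormalized predictions are preserved exactly, and the network is trained against the normalized target.

```python
import numpy as np

class PopArt:
    """Adaptively normalise targets while preserving the network's
    unnormalised outputs: when the running mean/scale of the targets
    change, the last linear layer (w, b) is rescaled to compensate."""
    def __init__(self, dim, step=1e-3):
        self.w, self.b = np.ones(dim), 0.0        # output layer being protected
        self.mean, self.sq_mean, self.step = 0.0, 1.0, step

    @property
    def std(self):
        return np.sqrt(max(self.sq_mean - self.mean ** 2, 1e-8))

    def update_stats(self, target):
        old_mean, old_std = self.mean, self.std
        self.mean += self.step * (target - self.mean)
        self.sq_mean += self.step * (target ** 2 - self.sq_mean)
        # Rescale so that std*(w.h + b) + mean is unchanged for every h.
        self.w *= old_std / self.std
        self.b = (old_std * self.b + old_mean - self.mean) / self.std

    def normalise(self, target):
        return (target - self.mean) / self.std    # train the net on this

popart = PopArt(dim=16)
popart.update_stats(1000.0)        # targets suddenly grow by orders of magnitude
y_norm = popart.normalise(1000.0)  # learning proceeds on a well-scaled target
```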

NIPS 2016

Attend, Infer, Repeat: Fast Scene Understanding with Generative Models

Authors: S M A Eslami, N Heess, T Weber, Y Tassa, D Szepesvari, K Kavukcuoglu, G E Hinton

We present a framework for efficient inference in structured image models that explicitly reason about objects. We achieve this by performing probabilistic inference using a recurrent neural network that attends to scene elements and processes them one at a time. Crucially, the model itself learns to choose the appropriate number of inference steps. We use this scheme to learn to perform inference in partially specified 2D models (variable-sized variational auto-encoders) and fully specified 3D models (probabilistic renderers). We show that such models learn to identify multiple objects - counting, locating and classifying the elements of a scene - without any supervision, e.g., decomposing 3D images with various numbers of objects in a single forward pass of a neural network. We further show that the networks produce accurate inferences when compared to supervised counterparts, and that their structure leads to improved generalization.

ACL 2016

Learning the Curriculum with Bayesian Optimization for Task-Specific Word Representation Learning

Authors: Y Tsvetkov, M Faruqui, W Ling, B MacWhinney, C Dyer

We use Bayesian optimization to learn curricula for word representation learning, optimizing performance on downstream tasks that depend on the learned representations as features. The curricula are modeled by a linear ranking function which is the scalar product of a learned weight vector and an engineered feature vector that characterizes the different aspects of the complexity of each instance in the training corpus. We show that learning the curriculum improves performance on a variety of downstream tasks over random orders and in comparison to the natural corpus order.
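
A sketch of the curriculum machinery under illustrative features and weights: each training instance is scored by the linear ranking function w · φ(x) over engineered complexity features, and the corpus is sorted by score; in the paper, w is the quantity tuned by Bayesian optimization against downstream-task performance.

```python
import numpy as np

def curriculum_order(features, w):
    """Sort instances by the linear ranking score w . phi(x)."""
    return np.argsort(features @ w)   # low score first

phi = np.random.rand(1000, 6)         # e.g. length, rarity, entropy features (made up)
w = np.array([0.5, -1.2, 0.3, 0.0, 0.8, -0.4])   # illustrative weight vector
order = curriculum_order(phi, w)      # training order for the corpus
```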

ACL 2016

Many Languages, One Parser

Authors: W Ammar, G Mulcaire, M Ballesteros, C Dyer, N A Smith

We train one multilingual model for dependency parsing and use it to parse sentences in several languages. The parsing model uses (i) multilingual word clusters and embeddings; (ii) token-level language information; and (iii) language-specific features (fine-grained POS tags). This input representation enables the parser not only to parse effectively in multiple languages, but also to generalize across languages based on linguistic universals and typological similarities, making it more effective to learn from limited annotations. Our parser's performance compares favorably to strong baselines in a range of data scenarios, including when the target language has a large treebank, a small treebank, or no treebank for training.

ACL 2016

Synthesizing Compound Words for Machine Translation

Authors: A Matthews, E Schlinger, A Lavie, C Dyer

Most machine translation systems construct translations from a closed vocabulary of target word forms, posing problems for translating into languages that have productive compounding processes. We present a simple and effective approach that deals with this problem in two phases. First, we build a classifier that identifies spans of the input text that can be translated into a single compound word in the target language. Then, for each identified span, we generate a pool of possible compounds which are added to the translation model as “synthetic” phrase translations. Experiments reveal that (i) we can effectively predict what spans can be compounded; (ii) our compound generation model produces good compounds; and (iii) modest improvements are possible in end-to-end English–German and English–Finnish translation tasks. We additionally introduce KomposEval, a new multi-reference dataset of English phrases and their translations into German compounds.

ACL 2016

Cross-lingual Models of Word Embeddings: An Empirical Comparison

Authors: M Faruqui, S Upadhyay, C Dyer, D Roth

Despite interest in using cross-lingual knowledge to learn word embeddings for various tasks, a systematic comparison of the possible approaches is lacking in the literature. We perform an extensive evaluation of four popular approaches of inducing cross-lingual embeddings, each requiring a different form of supervision, on four typologically different language pairs. Our evaluation setup spans four different tasks, including intrinsic evaluation on mono-lingual and cross-lingual similarity, and extrinsic evaluation on downstream semantic and syntactic applications. We show that models which require expensive cross-lingual knowledge almost always perform better, but cheaply supervised models often prove competitive on certain tasks.

ACL 2016

Latent Predictor Networks for Code Generation

Authors: W Ling, E Grefenstette, K M Hermann, T Kočiský, A Senior, F Wang, P Blunsom

Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks.