Solving intelligence through research.

We’re working on some of the world’s most complex and interesting research challenges, with the ultimate goal of solving intelligence. To do this, we’ve developed a new way to organise research that combines the long-term thinking and interdisciplinary collaboration of academia with the relentless energy and focus of the very best technology start-ups. This approach is yielding rapid progress on a set of exceptionally tough scientific problems: our team has achieved two Nature front covers in under a year, received numerous awards, and published over 100 peer-reviewed papers. And we’re hiring!

Highlighted Research

Awards

Featured Publications

Nature 2016

Hybrid computing using a neural network with dynamic external memory

Authors: A Graves, G Wayne, M Reynolds, T Harley, I Danihelka, A Grabska-Barwinska, S Gomez Colmenarejo, E Grefenstette, T Ramalho, J Agapiou, A Puigdomènech, K M Hermann, Y Zwols, G Ostrovski, A Cain, H King, C Summerfield, P Blunsom, K Kavukcuoglu, D Hassabis

Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.
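To make the core mechanism concrete, here is a minimal, illustrative sketch of the idea described above: a controller reads from and writes to an external memory matrix using soft, differentiable attention weights. This is not the published DNC implementation; the memory size, the erase-then-add update, and the dot-product addressing below are simplifying assumptions (the paper uses cosine-similarity content addressing together with dynamic memory allocation and temporal link tracking).

import numpy as np

N, W = 16, 8                        # assumed: number of memory slots, width of each slot
memory = np.zeros((N, W))           # the external memory matrix

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def read(memory, weights):
    # Differentiable read: a weighted sum over memory rows, returning a vector of shape (W,).
    return weights @ memory

def write(memory, weights, erase_vec, add_vec):
    # Differentiable write: softly erase each slot, then softly add new content.
    erase = np.outer(weights, erase_vec)    # shape (N, W)
    add = np.outer(weights, add_vec)
    return memory * (1.0 - erase) + add

# Content-based addressing: attend to slots that match a key emitted by the controller.
key = np.random.randn(W)
weights = softmax(memory @ key)             # dot-product scores used here for brevity

memory = write(memory, weights, erase_vec=np.full(W, 0.5), add_vec=key)
read_vector = read(memory, weights)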

View Publication | Blog Post
arXiv 2016

WaveNet: A Generative Model for Raw Audio

Authors: A van den Oord, S Dieleman, H Zen, K Simonyan, O Vinyals, A Graves, N Kalchbrenner, A Senior, K Kavukcuoglu

This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
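As a rough illustration of the autoregressive idea, here is a minimal sampling loop in the spirit of the description above: each new sample is drawn from a predictive distribution conditioned on everything generated so far. The predict_next function is a hypothetical stand-in for the trained network (the real model uses stacks of dilated causal convolutions over quantised audio), and the 256-level quantisation and 16 kHz rate are assumptions made for the example.

import numpy as np

NUM_LEVELS = 256                          # assumed: 8-bit quantised amplitude levels

def predict_next(history):
    # Hypothetical placeholder: a trained network would map the samples
    # generated so far to a probability distribution over the next sample.
    logits = np.random.randn(NUM_LEVELS)
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def generate(num_samples, seed=0):
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(num_samples):
        probs = predict_next(samples)      # condition on all previously generated samples
        samples.append(rng.choice(NUM_LEVELS, p=probs))
    return np.array(samples)

waveform = generate(16000)                 # roughly one second of audio at 16 kHz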

View Publication | Blog Post

Nature 2016

Mastering the game of Go with Deep Neural Networks & Tree Search

Authors: D Silver, A Huang, C J Maddison, A Guez, L Sifre, G van den Driessche, J Schrittwieser, I Antonoglou, V Panneershelvam, M Lanctot, S Dieleman, D Grewe, J Nham, N Kalchbrenner, I Sutskever, T Graepel, T Lillicrap, M Leach, K Kavukcuoglu, D Hassabis

Nature 2015

Human-Level Control Through Deep Reinforcement Learning

Authors: V Mnih, K Kavukcuoglu, D Silver, A Rusu, J Veness, M G Bellemare, A Graves, M Riedmiller, A Fidjeland, G Ostrovski, S Petersen, C Beattie, A Sadik, I Antonoglou, H King, D Kumaran, D Wierstra, S Legg, D Hassabis

View All Publications

Latest Research News

More Research News

Working at DeepMind

We're looking for exceptional people.

Meet some of the team

Andreas Fidjeland

Andreas is our Head of Research Engineering and joined DeepMind in 2012. One of his earliest memories of DeepMind is having meetings on the 'meeting picnic blanket' in Russell Square, after having run out of space in the first office! Previously Andreas was a postdoc at Imperial College London, working on spiking neural network simulations using GPUs in the Cognitive Robotics Lab. His team works to accelerate the research programme at DeepMind by providing the software used across all research projects, as well as contributing directly to research. Andreas’ main focus is making sure his team gets to work on interesting problems and that the research team functions smoothly and has the tools and support it needs. He says DeepMind is a “great collaborative environment and the best place to be at the forefront of developments in AI.”

Raia Hadsell

Raia is a Senior Research Scientist working on Deep Learning at DeepMind, with a particular focus on solving robotics and navigation using deep neural networks. She joined DeepMind following positions at Carnegie Mellon and SRI International, because she saw the combination of research into games, neuroscience, deep learning and reinforcement learning as a unique proposition that could lead to fundamental breakthroughs in AI. She says that one of her favourite moments at DeepMind was watching the livestream of Lee Sedol playing AlphaGo at 4am, surrounded by the rest of the team, despite the time difference!

Frederic Besse

Frederic joined as a Research Engineer in July 2015. Prior to DeepMind, he was a research engineer at the Foundry, a VFX software company. Frederic’s job is to accelerate research and take the lead on the engineering side of projects. He mainly focuses on generative models, a family of models in unsupervised learning. He describes his job as trying to teach a computer to process data like the human brain: “To dream and imagine things that it has never seen before. One way to achieve this is to show the computer a lot of data and let it figure out why things look like they do.” Frederic joined DeepMind to be a part of our exciting and challenging mission to solve intelligence. His favourite DeepMind memory was watching the AlphaGo vs Lee Sedol match: “The suspense and atmosphere in the office was amazing.”