Research Team

Pioneering intelligent systems with scientific rigour

Overview

The DeepMind Research team brings together multidisciplinary, collaborative teams to conduct cutting-edge AI research. By combining extraordinary intellectual freedom and scientific rigour with access to world-class resources and a structured, supportive culture, we have established an unparalleled track record of AI breakthroughs.

Our team

Our pioneering scientists and engineers have taught agents to cooperate, play world-class chess, diagnose eye disease, and predict the complex 3D shapes of proteins. Combined with a strong focus on safety, ethics, and robustness, the team works to create systems that can provide extraordinary benefits to society.

Research themes

We bring together knowledge from diverse disciplines to research the entire spectrum of intelligence, from deep learning and neuroscience to robotics and safety.

Control & robotics

General-purpose learning systems must be able to cope with the richness and complexity of the real world. This challenge drives the control and robotics teams at DeepMind, which aim to create mechanical systems that can learn to perform complex manipulation tasks with minimal prior knowledge. The shared ambition is to create systems that are data-efficient, reliable, and robust.

Read our control & robotics publications

Significant breakthroughs

We're proud of our track record of breakthroughs in fundamental AI research, published in journals such as Nature and Science.

AlphaZero: Shedding new light on chess, shogi, and Go

AlphaZero is a single system that learned to play three famously complex games, becoming the strongest player in history for each. Learning entirely from scratch, AlphaZero developed its own distinctive style that continues to inspire human grandmasters.

DQN: Human-level control of Atari games

One of the great challenges in AI is building flexible systems that can take on a wide range of tasks. Our Deep Q-Network (DQN) surpassed the overall performance of professional players in 49 different Atari games using only raw pixels and the score as inputs.
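
As a rough illustration, the sketch below pairs a small convolutional network over stacked pixel frames with the one-step temporal-difference loss used to train it against the game score. It is a minimal sketch in PyTorch: the layer sizes, the `QNetwork` and `dqn_loss` names, and the use of a separate target network follow the general DQN recipe rather than any released DeepMind code.

```python
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps a stack of raw pixel frames to one Q-value per action."""

    def __init__(self, num_actions: int, frame_stack: int = 4):
        super().__init__()
        # Layer sizes assume 84x84 greyscale frames, a common preprocessing choice.
        self.net = nn.Sequential(
            nn.Conv2d(frame_stack, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)


def dqn_loss(q_net, target_net, batch, gamma: float = 0.99):
    """One-step temporal-difference loss on a batch of replayed transitions."""
    states, actions, rewards, next_states, dones = batch
    # Q-value of the action actually taken in each state.
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target from a separate, periodically copied target network.
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q
    return nn.functional.smooth_l1_loss(q_values, targets)
```

Keeping the target network fixed between periodic copies is what stabilises the bootstrapped targets while the main network learns from replayed experience.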

A neural network with dynamic memory

The differentiable neural computer (DNC) learns to use its external memory to answer questions about different kinds of complex structured data, such as artificially generated stories, family trees, or a map of the London Underground.
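
To give a flavour of what "external memory" means here, the minimal sketch below (in PyTorch; the function name, shapes, and example values are illustrative assumptions rather than the published implementation) shows content-based addressing, the mechanism the DNC uses to read from its memory: the controller emits a query key, and the read result is a similarity-weighted average of the memory slots.

```python
import torch
import torch.nn.functional as F


def content_based_read(memory: torch.Tensor, key: torch.Tensor,
                       strength: torch.Tensor) -> torch.Tensor:
    """Read from an external memory matrix by similarity to a query key.

    memory:   (num_slots, slot_size) matrix of stored vectors
    key:      (slot_size,) query emitted by the controller network
    strength: scalar that sharpens or softens the attention weights
    """
    similarity = F.cosine_similarity(memory, key.unsqueeze(0), dim=1)  # (num_slots,)
    weights = F.softmax(strength * similarity, dim=0)                  # attention over slots
    return weights @ memory                                            # weighted sum = read vector


# Query a small random memory with a key resembling the first slot.
memory = torch.randn(16, 32)
key = memory[0] + 0.1 * torch.randn(32)
read_vector = content_based_read(memory, key, strength=torch.tensor(5.0))
```

Because the read weights are differentiable, the controller can learn end-to-end where to store and retrieve the pieces of a family tree or an Underground map that it needs to answer a question.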

WaveNet: A generative model for raw audio

When it was introduced, WaveNet generated realistic, human-sounding speech that reduced the gap between computer and human performance by over 50%.
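
WaveNet models audio one waveform sample at a time. The sketch below shows the generic autoregressive sampling loop this implies; the `model` argument, its input/output shapes, and the default receptive field are placeholder assumptions standing in for the published stack of dilated causal convolutions.

```python
import torch


def sample_waveform(model, seed: torch.Tensor, num_samples: int,
                    receptive_field: int = 1024) -> torch.Tensor:
    """Autoregressively extend a quantised waveform one sample at a time.

    `model` is assumed to map a window of past sample indices, shaped
    (batch, time), to logits over the next sample value, shaped
    (batch, time, num_classes). `seed` is a 1-D tensor of sample indices.
    """
    waveform = seed.clone()
    for _ in range(num_samples):
        context = waveform[-receptive_field:].unsqueeze(0)         # (1, time)
        logits = model(context)[0, -1]                             # logits for the next sample
        next_sample = torch.distributions.Categorical(logits=logits).sample()
        waveform = torch.cat([waveform, next_sample.unsqueeze(0)])
    return waveform
```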

AlphaGo defeats Lee Sedol in the game of Go

In the course of winning, AlphaGo revealed entirely new knowledge about perhaps the most studied and contemplated game in history.

GQN: Learning to see

The Generative Query Network (GQN) allows computers to learn about a scene purely from observation, much like how infants learn to understand the world.
