The Deep Learning Lecture Series 2020

DeepMind x UCL

Over the past decade, Deep Learning has emerged as the leading artificial intelligence paradigm, giving us the ability to learn complex functions from raw data with unprecedented accuracy and scale.

Deep Learning has been applied to problems in object recognition, speech recognition, speech synthesis, forecasting, scientific computing, control and many more. The resulting applications are touching all of our lives in areas such as healthcare and medical research, human-computer interaction, communication, transport, conservation, manufacturing and many other fields of human endeavour. In recognition of this huge impact, the 2019 Turing Award, the highest honour in computing, was awarded to pioneers of Deep Learning.

Lectures

In this series, DeepMind Research Scientists and Research Engineers deliver 12 lectures on a range of topics in Deep Learning.

Lecture 1: Intro to Machine Learning & AI

DeepMind Research Scientist and UCL professor Thore Graepel explains DeepMind's machine-learning-based approach to AI.

Lecture 2: Neural Networks Foundations

DeepMind Research Scientist Wojciech Czarnecki covers the basics of how neural networks operate, learn and solve problems.

Lecture 3: Convolutional Neural Networks

DeepMind Research Scientist Sander Dieleman takes a closer look at convolutional network architectures through several case studies.

Lecture 4: Advanced Models for Computer Vision

DeepMind Research Scientist Viorica Patraucean introduces classic computer vision tasks beyond image classification and describes state-of-the-art models for each, together with standard benchmarks.

Lecture 5: Optimisation for Machine Learning

DeepMind Research Scientist James Martens covers the fundamentals of gradient-based optimisation methods and their application to training neural networks.

Lecture 6: Sequences and Recurrent Networks

DeepMind Research Scientist Marta Garnelo focuses on sequential data and how machine learning methods have been adapted to process this particular type of structure.

Lecture 7: Deep Learning for Natural Language Processing

DeepMind Research Scientist Felix Hill discusses the motivation for modelling language with ANNs, as well as unsupervised learning and representation learning for language.

Lecture 8: Attention and Memory in Deep Learning

DeepMind Research Scientist Alex Graves covers contemporary attention mechanisms, including the implicit attention present in any deep network.

Lecture 9: Generative Adversarial Networks

DeepMind Research Scientist Jeff Donahue and Research Engineer Mihaela Rosca discuss the theory behind GAN models, their optimisation, and improvements to them.

Lecture 10: Unsupervised Representation Learning

DeepMind Research Scientist Irina Higgins and DeepMind Research Engineer Mihaela Rosca give an overview of the historical role of unsupervised representation learning.

Lecture 11: Modern Latent Variable Models

DeepMind Research Scientist Andriy Mnih explores latent variable models, a powerful and flexible framework for generative modelling.

Lecture 12: Responsible Innovation & Artificial Intelligence

DeepMind Research Scientists Chongli Qin and Iason Gabriel explore questions such as: what can we do to build algorithms that are safe, reliable and robust?