DeepMind x UCL
Over the past decade, Deep Learning has emerged as the leading artificial intelligence paradigm, enabling us to learn complex functions from raw data with unprecedented accuracy and scale.
This lecture series, created in collaboration with University College London (UCL), serves as an introduction to the topic. Comprising eight lectures, it covers everything from the fundamentals of neural networks and optimisation methods through to natural language processing and generative models. The series was designed to complement the 2018 Reinforcement Learning lecture series. A newer version of the course, recorded in 2020, can be found here.
In this series, Research Scientists and Research Engineers from DeepMind deliver eight lectures on a range of topics in Deep Learning.
Lecture 1: Introduction to Machine Learning Based AI
Research Scientist Thore Graepel shares an introduction to machine learning based AI.
Lecture 2: Introduction to TensorFlow
Research Engineer Matteo Hessel and Software Engineer Alex Davies share an introduction to TensorFlow.
Lecture 3: Neural Networks Foundations
Research Scientist Simon Osindero shares an introduction to neural networks.
Lecture 4: Beyond Image Recognition
Senior Research Scientist Raia Hadsell discusses topics including end-to-end learning and embeddings.
Lecture 5: Optimisation for Machine Learning
Research Scientist James Martens explores optimisation for machine learning.
Lecture 6: Deep Learning for NLP
Research Scientist Ed Grefenstette gives an overview of deep learning for natural language processing.
Lecture 7: Attention and Memory in Deep Learning
Research Scientist Alex Graves discusses the role of attention and memory in deep learning.
Lecture 8: Unsupervised Learning and Generative Models
Research Scientist Shakir Mohamed gives an overview of unsupervised learning and generative models.