When I was an undergraduate in the mid-90s, there was very little active engagement between the academic communities pushing the boundaries of maths and science and the industries that many students ended up joining, such as finance. This struck me as a missed opportunity. While private institutions benefited from the technological advances driven by university researchers, the breakthroughs they went on to make were rarely shared back for mutual benefit.
In contrast, we often talk about DeepMind’s research environment as a hybrid culture that blends the long-term scientific thinking of academia with the speed and focus of the best start-ups. This alignment with academia has always been important to us personally, given how many of our team come from that background, as well as the fact that many of the core ideas behind machine learning were invented and developed by academic pioneers such as Geoff Hinton and Rich Sutton.
This is a major reason why we openly publish our research - including over 100 peer-reviewed papers to date - and regularly present at industry-wide gatherings such as NIPS. Last month in Barcelona we published 20 papers, participated in 42 poster sessions, gave 21 talks, and open-sourced our flagship DeepMind Lab research platform - and there’s a lot more to come.
We also want to make a more direct contribution to academic learning and to training the next generation of machine learning practitioners. So, starting this month, we’ll be running a state-of-the-art Masters-level training module, Advanced Topics in Machine Learning, with University College London’s (UCL) Department of Computer Science. The module is led by DeepMind’s Thore Graepel, with invited speakers who are leading researchers in areas such as deep learning, reinforcement learning and natural language understanding. Hado van Hasselt, Joseph Modayil, Koray Kavukcuoglu, Raia Hadsell, James Martens, Oriol Vinyals, Shakir Mohamed, Simon Osindero, Ed Grefenstette and Karen Simonyan will be joined by Volodymyr Mnih, David Silver and Alex Graves - who are also among the first authors of DeepMind’s three Nature papers.
January also sees the start of our Deep Learning for Natural Language Processing advanced course at the University of Oxford’s Department of Computer Science. This applied course, focusing on recent advances in analysing and generating speech and text using recurrent neural networks, is led by Phil Blunsom in partnership with DeepMind’s Language Research Group, and open to fourth year undergraduates, Masters, and first year DPhil (PhD) students. Both of these courses run in addition to the international summer schools that our team members regularly teach at, with events taking place this year in Germany, China and South Africa among other locations.
We also make sure that people who come to work here can continue to make their own contributions to academia. A number of our team are affiliated with institutions including UCL, Oxford, Cambridge, MIT and the universities of Freiburg and Lille, among others.
Finally, we think it’s important for the field that there are as many thriving independent academic institutions as possible. That’s why we’re sponsoring several research labs and their PhD students to pursue their own research priorities in whichever way they choose, including those at the University of Alberta, the University of Montreal, the University of Amsterdam, the Gatsby Unit at UCL, NYU and Oxford, among others.
We see the links between company research labs and academia as central to the future of AI. By continuing to share talent, expertise and breakthroughs - not just on technical subjects, but also on the broader set of questions around ethics, safety and societal impact - we believe we’ll all make better progress in the development of artificial intelligence and its application for positive social benefit.