Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at DeepMind investigates questions related to objective specification, robustness, and trust in machine learning systems, including building formal evidence for the correctness and safety of these systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of DeepMind Research: to build safe and socially beneficial AI systems, and ultimately to ensure beneficial deployment of AGI.
Research on technical AGI safety draws on expertise in machine learning, theorem proving, and the foundations of agent models. Research Scientists on this team work at the forefront of technical approaches to designing systems that reliably function as intended, while discovering and mitigating possible long-term risks, in close collaboration with other AI research groups within and outside of DeepMind.
- Identify and investigate possible failure modes for current and future AI systems, and proactively develop solutions to address them
- Create techniques and tools for specifying and verifying the correctness of AI systems and their supporting infrastructure
- Conduct empirical or theoretical research into technical safety mechanisms for AI systems in coordination with the team’s broader technical agenda
- Collaborate with research teams within and outside of DeepMind to ensure that AI capabilities research is informed by and adheres to the most advanced safety research and protocols
- Report and present research findings and developments to internal and external collaborators with effective written and verbal communication
- PhD in a technical field or equivalent practical experience
- PhD in formal verification, machine learning, computer science, or mathematics
- Experience with deep learning, reinforcement learning, or other areas of machine learning
- Relevant research experience with interactive theorem proving (e.g., HOL4, Isabelle, Coq), formal modelling, and/or verification of software and hardware systems
We are also accepting applications for internships.
We offer a competitive salary.
DeepMind welcomes applications from all sections of society. We are committed to equal employment opportunity regardless of race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.