
Responsibility & Safety

We want AI to benefit the world, so we must be thoughtful about how it’s built and used

Our approach

AI can provide extraordinary benefits, but like all technology, it can have negative impacts unless it’s developed and deployed responsibly.

Guided by our AI Principles, we work to anticipate and evaluate our systems against a broad spectrum of AI-related risks, taking a holistic approach to responsibility and safety. Our approach is centered on responsible governance, responsible research and responsible impact.

To empower teams to pioneer responsibly and safeguard against harm, the Responsibility and Safety Council (RSC), our longstanding internal review group co-chaired by our COO Lila Ibrahim and Senior Director of Responsibility Helen King, evaluates Google DeepMind’s research, projects and collaborations against our AI Principles. The RSC advises and partners with research and product teams on our highest-impact work. Our AGI Safety Council, led by our Co-Founder and Chief AGI Scientist Shane Legg, works closely with the RSC to safeguard our processes, systems and research against extreme risks that could arise from powerful AGI systems in the future. We’ve also signed public commitments to ensure safe, secure and trustworthy AI, statements urging mitigation of AI risks to society, and pledges against using our technologies for lethal autonomous weapons.


We also have world-class teams focused on technical safety, ethics, governance, security, and public engagement, who work to grow our collective understanding of AI-related risks and potential mitigations. To help lead the industry forward, our recent research includes best practices for data enrichment and frameworks for evaluating general-purpose models against novel threats and ethical and social risks.


AI that benefits everyone

Responsibility and safety issues go well beyond any one organization. Our teams work with many brilliant non-profits, academics, and other companies to apply AI to problems that underpin global challenges, while proactively mitigating risks, and we support open research and investigation into the wider impacts of AI. To help prevent the misuse of our technologies, in 2023 we helped establish the cross-industry Frontier Model Forum to ensure the safe and responsible development of frontier AI models.

We collaborate with other leading research labs, as well as the Partnership on AI, which we co-founded to bring together academics, charities, and company labs to solve common challenges.

To ensure AI benefits everyone, we also believe that the people building it must reflect and engage with diverse communities. So we’re working with universities on scholarships for people from underrepresented backgrounds, partnering with the Raspberry Pi Foundation to develop lesson plans for teachers, and supporting community efforts such as Women in Machine Learning and the African Deep Learning Indaba. We’re also part of a collaborative project with the Commonwealth to support small states in developing responsible AI.


