Governance and accountability

The creation and use of powerful new technologies require effective governance and regulation to ensure that their deployment is safe, equitable, and accountable. In the case of AI, new standards or institutions may be needed to oversee its use by individuals, states, and the private sector, both within national borders and internationally. This raises important questions about how such standards and institutions ought to be designed so that they are legitimate and effective, and uphold the rights of everyone affected by AI.

Open questions include:

  • How will the increasing use and sophistication of AI technologies interact with corporate power? What mechanisms are needed to ensure that private sector use of AI upholds the public interest?
  • How can governments, civil society, and other organisations keep pace with the development of AI technologies? What skills and collaborations are required for this?
  • What precedents from other scientific fields, such as biotechnology or genetics, can help inform the debate on the regulation and application of AI?
  • What standards should exist around the technical safety of AI systems, and how could these be defined and enforced?