Managing AI risk: misuse and unintended consequences
AI systems are likely to play a role in every sector of the economy and to have a profound effect on our social and political institutions. Even if these systems generally improve decision-making and outcomes, they may still carry risks of unintended consequences or malfunction. Equally, they may be used for unethical purposes or relied upon too heavily in situations that exceed their capabilities. The field of technical AI safety is starting to make progress on these topics, and it carries with it an accompanying set of important policy and ethics questions, which our research will help address.
Open questions include:
- What are the primary risks that AI system failures pose to society, and how can these risks be monitored and addressed at scale?
- What new societal risks emerge when different AI systems begin to interact with each other and with existing human systems, and how can we ensure that people remain in control?
- How can the dangerous application of AI technologies to warfare be restricted or prevented? What systems need to be put in place to stop the development and deployment of fully autonomous weapons?
- What can be done to foster the safe and ethical implementation of AI, given its dual-use nature and given that relevant research is widely published and replicable?