AI morality and values
For AI systems to be used in the service of society, they will need to make recommendations or decisions that align with ethical norms and values. However, specifying what we mean by human values is itself a substantial challenge, before any of the technical steps needed to incorporate them into an AI system are even considered. Any discussion of morality must also account for the different values held by different people and groups, and for the risk that encoding the values of a majority could lead to discrimination against minorities.
Open questions include:
- What are the relevant ethical approaches for answering questions about AI morality? Is there a single approach, or many?
- How can we ensure that the values designed into AI systems truly reflect what society wants, given that preferences change over time and people often hold differing, contradictory, and overlapping priorities?
- How can insights into shared human values be translated into a form that can usefully inform AI design and development?