Privacy, transparency and fairness
Given that many AI systems rely on large-scale and sometimes sensitive datasets, questions about privacy, transparency and fairness will only grow in importance. Each of these themes is a critical issue in its own right, and they also intersect in complicated ways. For example, an automated decision-making system could undermine privacy through expansive use of personal information, or lock in existing societal biases that appear in the data used to train its algorithms. Moreover, insufficient transparency about the sources of data, the functioning of the algorithms themselves, or the surrounding organisational processes may hinder accountability.
Open questions include:
- Will the use of AI systems exacerbate conflict between the individual and society, given the potential benefits and risks of data use? How can these conflicts be mediated or resolved?
- How do important concepts such as data control, consent and ownership relate to the use of AI systems? How can these concepts be operationalized so that they guarantee freedoms for individuals and groups?
- What approaches are needed to fully understand biases in AI systems and data? What strategies should those designing AI systems use to counteract or minimise these effects?
- What forms of transparency are most useful for ensuring the beneficial use of AI? What policies and tools are required to allow for meaningful audits of AI systems and the data they use?
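As a concrete illustration of what a "meaningful audit" of bias might involve, the sketch below computes one simple fairness metric, the demographic parity gap: the difference in favourable-decision rates between groups. This is only one of many possible fairness notions, and the function names and data here are hypothetical, not drawn from any particular auditing tool.

```python
# Minimal sketch of a bias audit using demographic parity.
# All names and data below are illustrative assumptions.

def selection_rate(decisions, groups, group):
    """Fraction of favourable (1) decisions received by members of `group`."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rates across groups (0 = parity)."""
    rates = [selection_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical audit data: 1 = favourable decision, two groups "A" and "B".
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # group A: 0.75, group B: 0.25
```

Even a metric this simple shows why transparency about data and decisions matters: the gap can only be computed if auditors have access to both the system's outputs and the relevant group attributes, which is itself in tension with the privacy concerns discussed above.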