Privacy, transparency, and fairness
AI systems often rely on large-scale and sometimes sensitive datasets, such as medical or criminal justice records. This raises important questions about protecting people’s privacy and ensuring that they understand how their data is used. In addition, the data used to train automated decision-making systems can contain biases, producing systems that may discriminate against certain groups of people.
- How do concepts such as consent and ownership relate to using data in AI systems?
- What can AI researchers do to detect and minimise the effects of bias?
- What policies and tools allow meaningful audits of AI systems and their data?
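As a concrete illustration of the bias-detection question above, one simple and widely used check is demographic parity: comparing the rate of favourable decisions a system gives to different groups. The function and toy data below are a minimal hypothetical sketch, not a complete audit method; the group labels and decisions are invented for illustration.

```python
def demographic_parity_difference(decisions, groups):
    """Absolute difference in favourable-decision rates between two groups.

    decisions: list of 0/1 outcomes (1 = favourable decision)
    groups:    list of group labels, aligned with decisions (two groups assumed)
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

# Toy, invented data: group "A" receives favourable decisions far more
# often than group "B", so the parity gap is large.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # → 0.5
```

A large gap does not by itself prove discrimination, but it flags where a closer audit of the data and model is warranted; metrics like this are one input to the auditing tools and policies the last question asks about.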