‘I like to take things slow. Take it slowly and get it right first time,’ one participant said, but was quickly countered by someone else around the table: ‘But I’m impatient, I want to see the benefits now.’ This exchange neatly captures many of the conversations I heard at DeepMind Health’s recent Collaborative Listening Summit. It also represents, in layman’s terms, the debate that tech thinkers and policy-makers are having right now about the future of artificial intelligence.
The Collaborative Listening Summit brought together members of the public, patient representatives and stakeholders, and was facilitated by Ipsos MORI. Its objective was to explore how principles, co-created in earlier events with the public, patients and stakeholders, should govern DeepMind Health’s operating practices and engagement with the NHS. These principles ranged from the technical – for example, how evidence should inform DeepMind’s practice – to the societal – for example, operating in the best interests of society.
The question of how technology companies and the NHS should interact has left many of us, myself included, cautious about the risk of big technology firms leveraging their finance and power over an NHS under seemingly endless pressure. Despite our desire to see the NHS become more agile and innovative, the ‘move fast and break things’ mentality of big tech is something the public and policy-makers are rightly wary of when lives are at stake.
These fears usually manifest themselves in headlines about protecting patient data, but the issues run far deeper than this. For example, algorithms have an increasingly large part to play in our everyday lives, deciding which film we might like to watch next or the adverts we see online, but undergo relatively little scrutiny. In health care specifically, algorithms are beginning to augment existing services, with NHS England predicting that algorithms will soon handle one in three NHS 111 enquiries. Algorithms currently in place are relatively simple, and they need to be tested to ensure they are safe and effective. As the technology acquires greater decision-making capability, testing will become more challenging and more important. Frankly, regulators and policy-makers are still getting up to speed with these issues, so a cautious approach to regulation is sensible.
There is evidently a need for policy-makers to deepen their understanding of what the rules of engagement for tech companies should be. Initiatives like the government’s new Centre for Data Ethics and Innovation, intended to promote ethical data innovation, provide a platform for this, as do the trailblazing efforts of the biomedical research community. However, if lapses in public trust like GM foods, care.data and even the Royal Free/DeepMind data-sharing agreement are to be avoided, big tech companies must themselves take responsibility for understanding what their relationship with NHS organisations should be.
DeepMind’s past attempts to engage have drawn some criticism and show how challenging, yet important, it is to get dialogue with the public right. However, I was reassured that the Summit was designed to build understanding, researching what people think about an issue on which DeepMind staff themselves are on uncertain ground. Of course, for this to make any difference, the research needs to be applied – and that means practising the principles in the real world.
Principles such as ‘do what is best for society’ might seem trite and obvious, especially as, in the real world, contracts may be put above ethics, as participants at the event rightly pointed out. So, if DeepMind and other big tech companies are to hold themselves to ethical principles around testing, transparency and societal benefit, they must show that these principles translate into ethical business practice. This will mean not working with NHS organisations where they feel they cannot hold themselves to these principles.
And this will get harder as time goes on – currently there is more than enough interest in machine learning to keep DeepMind and its competitors busy. But as these technologies come on-stream and there is market share to be gained, it will become more and more tempting for DeepMind to break these principles, especially as other technology companies size up the NHS. This is where policy re-enters the picture: data ethics initiatives need to catch up with this work and encourage other companies to undertake robust research with the public. After all, the NHS cannot become a world leader in artificial intelligence if doctors and managers cannot look patients in the eye and assure them that their data is not being used in ways that are contrary to their own values.
Participants at the workshop were cautious, especially around the need to safeguard their data, but they could also see the benefits that new technologies could have in terms of saving time and lives. Tech companies should not be scared of engaging with the public – but they must listen to the people whose lives they claim to be bettering.