DeepMind Health and de-personalised research data
We're proud to be undertaking research partnerships with world-class hospital groups, exploring whether AI techniques can be used effectively and safely to support nurses and doctors. The results of this work will be subject to rigorous clinical scrutiny, and will be published in peer-reviewed academic journals.
To do this, we process de-personalised patient data, with identifiable features - such as a name or patient number - removed.
De-personalised research data, and how it is used
The identifying features that could connect a piece of information to an individual are removed before the data is used for research.
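As a rough illustration of what removing identifying features can look like in practice, the sketch below strips identifier fields from a record before it is passed on for research. The field names and structure here are hypothetical, chosen for clarity; they are not DeepMind Health's or its partners' actual data schema.

```python
# Hypothetical sketch: strip identifying fields from a patient record
# before it is used for research. Field names are illustrative only.

IDENTIFYING_FIELDS = {"name", "patient_number", "date_of_birth", "address"}

def depersonalise(record: dict) -> dict:
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

record = {
    "name": "Jane Doe",            # identifying - removed
    "patient_number": "0001",      # identifying - removed
    "scan_type": "OCT",            # clinical data - retained
    "diagnosis_code": "H35.3",     # clinical data - retained
}

research_record = depersonalise(record)
# research_record now contains only the non-identifying clinical fields
```

In a real pipeline, which fields count as identifying is decided and vetted by the hospital partner's information governance team, not by the research code itself.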
Our hospital partners decide which types of data are used for research.
In our collaboration with Moorfields, we have been sent one million de-personalised digital eye scans. We use these scans to explore whether AI tools can learn to safely and effectively identify conditions that cause sight loss.
In our collaboration with UCLH, we will work with up to 800 de-personalised CT scans from 500 former head and neck cancer patients. These scans are being analysed to explore whether AI tools can help reduce the amount of time it takes to plan radiotherapy treatment.
The rules governing our research on de-personalised data
Even though this data is de-personalised, it is still subject to strict controls. All data is governed by our hospital partners, and vetted by their information governance teams before being sent to DeepMind Health.
Data custodians have been appointed for both partnerships. Their role is to rigorously control access to the data: only those who require access to the research data for the purposes of the project are granted it, and all researchers and engineers involved in the study must complete thorough training before research work can begin.
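The access rule described above combines two conditions: custodian approval for the specific project, and completed training. A minimal sketch of that check, with entirely hypothetical names and lists, might look like this:

```python
# Hypothetical sketch of custodian-controlled access: a user may read
# the research data only if the data custodian has approved them for
# this project AND they have completed the required training.

APPROVED_FOR_PROJECT = {"researcher_a"}            # set by the data custodian
COMPLETED_TRAINING = {"researcher_a", "researcher_b"}

def may_access(user: str) -> bool:
    """Both conditions must hold before access is granted."""
    return user in APPROVED_FOR_PROJECT and user in COMPLETED_TRAINING

print(may_access("researcher_a"))  # approved and trained
print(may_access("researcher_b"))  # trained, but not approved for this project
```

The point of the two-condition structure is that neither training alone nor approval alone is sufficient; in practice such checks sit inside the partner hospital's information governance processes rather than a simple in-code list.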