“The AI will see you now”: is privacy the price of AI in healthcare?


Ever since the 1970s there has been a fascination with artificial intelligence, or AI. In the last decade or so, “smart” technology has developed and become part of our everyday lives; many of us are the proud owners of pocket-sized devices that can compute, play media and keep us in contact at the touch of a button. The potential applications of AI in medicine are vast and exciting, and could transform healthcare, yet the technology has not yet become a widespread phenomenon. That could be set to change as the technology is refined and becomes more accurate – but there are many other issues to address, not least privacy.

DeepMind is a London-based AI company founded in 2010. Its founders’ aim was to create an AI that could learn to perform almost any task. It differs from other AI systems in that it is not pre-programmed to perform its function, but learns from experience using a technique called “deep reinforcement learning”. The company grew rapidly – so much so that it was acquired by Google in 2014 – and has since attempted to bring AI to the forefront of every field it can.

DeepMind has already made significant breakthroughs in applying its technology to medicine. Since 2016 several London hospitals, including UCLH and the Royal Free, have struck partnerships with the company in order to trial its ideas on a large scale. This has led to some fantastic developments. Notably, in its collaboration with Moorfields Eye Hospital, DeepMind technology was able to identify eye diseases and make the correct referral in 94% of cases – as accurately as a top clinician. The results have been published in Nature Medicine and the technology is now awaiting clinical validation; if approved, Moorfields could use the AI across its 30 UK hospitals for an initial five years. If all goes to plan, this ground-breaking technology could see eye treatment revolutionised.

In 2017 a partnership with the Royal Free began, in which DeepMind’s “Streams” app started to be used in the detection of acute kidney injury (AKI). Having consulted staff at the Royal Free, researchers identified issues and frustrations with the existing system for detecting AKI. For example, the current NHS-approved algorithm for AKI detection generates false positives for chronic (rather than acute) kidney patients, and does not take into account key factors such as age and how long each patient has been in hospital. Furthermore, the dated technology still prevalent in the NHS means that clinicians may have to wait to log on to a shared computer to access test results, causing avoidable delays in treating AKI. DeepMind consequently developed Streams, a mobile app that alerts clinicians immediately when a patient’s test results show signs of deterioration. The app has been a huge hit with staff, with nurses praising its efficiency in identifying deteriorating patients and the time it saves as a result.
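To make the false-positive problem concrete, here is a deliberately simplified sketch in Python. It is not the NHS algorithm or DeepMind’s Streams logic – the function name, parameters and thresholds are invented for illustration – but it shows why comparing a result against the patient’s own baseline, with context such as known chronic kidney disease, matters for avoiding spurious alerts.

def aki_alert(latest_creatinine, baseline_creatinine, known_ckd=False):
    # Toy AKI alert rule. Purely illustrative: thresholds and names
    # are invented for this sketch, not taken from NHS guidance.
    #
    # Compare against the patient's own baseline rather than a
    # one-size-fits-all population value, so a chronically raised
    # creatinine is not mistaken for an acute injury.
    ratio = latest_creatinine / baseline_creatinine
    # Demand a steeper rise before alerting on a known CKD patient,
    # whose results sit above the normal range all the time.
    threshold = 2.0 if known_ckd else 1.5
    return ratio >= threshold

# A CKD patient whose creatinine is high but stable should not
# trigger an acute alert; a sudden doubling in a previously
# healthy patient should.
print(aki_alert(latest_creatinine=180, baseline_creatinine=170, known_ckd=True))  # False
print(aki_alert(latest_creatinine=240, baseline_creatinine=120))                  # True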

However, DeepMind’s intervention was not controversy-free. In July 2017, the Royal Free was found to have breached the Data Protection Act by sharing 1.6 million patients’ data with DeepMind during the app’s development. The Information Commissioner’s Office found that patients whose data was accessed would not have “reasonably expected” it to be shared with a third party in this way, especially given the sheer breadth of data involved. Nevertheless, the app was still allowed to be used, and DeepMind itself was not directly criticised, as it was the hospital that handed the data over.

The real concern came in November 2018 when, having previously promised that NHS “data will never be connected to Google accounts or services”, DeepMind Health became part of Google’s new California-based “Google Health” division. Although DeepMind bosses insist that nothing has changed in terms of data processing, this is one step towards Google potentially accessing and using patient data as a commercial asset. Worryingly, the restructure also meant that Google terminated the independent review panel set up to scrutinise DeepMind’s work with the NHS, on the grounds that the work is now “part of a global effort” rather than “a UK entity”.

It is clear that this technology has enormous potential and could be harnessed in healthcare to transform systems and even the diagnosis of patients. Initial trials have been resounding successes, and the AI will only continue to grow in capability. But however many benefits it brings, we must remain rigorous in protecting patient data and keeping it out of the wrong hands. Some feel it has already gone too far, and the most recent developments will have done nothing to ease those fears. The future prospects for DeepMind Health are undoubtedly exciting, but sufficient privacy safeguards must be put in place before it can progress further.

By Kushal Varma