Google’s AI Can Predict When A Patient Will Die
AI knows when you’re going to die. But unlike in sci-fi movies, that information could end up saving lives.
A new paper published in Nature suggests that feeding electronic health record data to a deep learning model could substantially improve the accuracy of projected outcomes. In trials using data from two U.S. hospitals, researchers showed that these algorithms could predict not only a patient’s length of stay and time of discharge, but also their time of death.
The neural network described in the study uses an immense amount of data, such as a patient’s vitals and medical history, to make its predictions. A new algorithm lines up the previous events in each patient’s records into a timeline, which allows the deep learning model to pinpoint future outcomes, including time of death. The neural network even draws on handwritten notes, comments, and scribbles on old charts to make its predictions. And it performs all of these calculations in record time, of course.
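The timeline idea can be sketched in a few lines. This is a minimal illustration assuming a simplified, invented event schema (`timestamp`, `type`, `value`) — not the paper’s actual data format or model:

```python
from datetime import datetime

# Hypothetical sketch: arranging a patient's scattered EHR events
# (vitals, notes, lab results) into one chronological sequence of the
# kind a deep learning model could consume as input.
# Field names and values here are assumptions for illustration only.

def build_timeline(events):
    """Sort raw record events by timestamp into a single ordered sequence."""
    return sorted(events, key=lambda e: e["timestamp"])

events = [
    {"timestamp": datetime(2018, 3, 2, 14, 0), "type": "note",
     "value": "patient reports chest pain"},
    {"timestamp": datetime(2018, 3, 1, 9, 30), "type": "vital",
     "value": {"heart_rate": 88}},
    {"timestamp": datetime(2018, 3, 2, 9, 15), "type": "lab",
     "value": {"troponin": 0.02}},
]

timeline = build_timeline(events)
print([e["type"] for e in timeline])  # earliest event first
```

The point is simply that once every event — structured or free-text — carries a timestamp, the whole record collapses into one sequence, which is the shape sequence models expect.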
What can we do with this information, besides fear the inevitable? Hospitals could find new ways to prioritize patient care, adjust treatment plans, and catch medical emergencies before they even occur. It could also free up healthcare workers, who would no longer have to manipulate the data into a standardized, legible format.
AI, of course, already has a number of other applications in healthcare. A pair of recently developed algorithms could diagnose lung cancer and heart disease even more accurately than human doctors. Health researchers have also fed retinal images to AI algorithms to determine the chances a patient could develop one (or more) of three major eye diseases.
But those early trials operated on a much smaller scale than what Google is trying to do. More and more of our health data is being uploaded to centralized computer systems, but most of these databases exist independently, spread across various healthcare systems and government agencies.
Funneling all of this personal data into a single predictive model owned by one of the largest private corporations in the world is a solution, but it’s not an appealing one. With the electronic health records of millions of patients in the hands of a small number of private companies, the likes of Google could quickly come to dominate the health industry and become a monopoly in healthcare.
Just last week, Alphabet-owned DeepMind Health came under scrutiny by the U.K. government over concerns it was able to “exert excessive monopoly power,” according to TechCrunch. And that relationship was already frayed over allegations that DeepMind Health broke U.K. laws in 2017 by collecting patient data without proper consent.
Healthcare professionals are already concerned about the effect AI will have on medicine once it’s truly embedded — especially if transparency safeguards aren’t put in place before then. The American Medical Association acknowledges in a statement that combining AI with human clinicians can bring significant benefits, but states that AI tools must “strive to meet several key criteria, including being transparent, standards-based, and free from bias.” The Health Insurance Portability and Accountability Act (HIPAA) passed by Congress in 1996 — 22 years is an eternity in technology terms — just won’t cut it.
Without an effective regulatory framework that encourages transparency in the U.S., it will be near impossible to hold these companies accountable. Until then, it may be up to the private companies themselves to ensure that AI’s impact on healthcare benefits patients, not just the companies.