Alphabet’s AI Might Be Able to Predict Kidney Disease

Google has a solution for the creaking inefficiencies of modern healthcare: push notifications. No, not those annoying reminders to practice your Arabic lesson on Duolingo or subscribe to a new Lyft deal. Google is betting its alerts can save your life. The company is building an artificial-intelligence-driven system that promises to give doctors early warning when dangerous medical conditions arise, part of its ongoing efforts to break into healthcare.

On Wednesday, Alphabet’s artificial intelligence lab DeepMind showed progress toward that kind of disease prediction, starting with a condition called acute kidney injury. Using software developed with the Department of Veterans Affairs, researchers were able to predict the condition in patients up to 48 hours before it occurred. The machine learning software was trained using medical records from more than 700,000 VA patients, and could anticipate 90 percent of cases where the damage was severe enough that a patient required dialysis.
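To make that headline number concrete, here is a minimal sketch, not DeepMind's code and using hypothetical field names, of how sensitivity within a 48-hour lead window might be scored against retrospective records like the VA's:

```python
from datetime import datetime, timedelta

# Illustrative sketch only -- not DeepMind's code. Each "patient" uses hypothetical
# fields: a list of (timestamp, risk_score) model outputs, and the onset time of
# dialysis-severity kidney injury, or None if it never occurred.
def lead_time_sensitivity(patients, threshold, lead=timedelta(hours=48)):
    """Fraction of severe episodes flagged at least once in the 48 hours before onset."""
    hits = total = 0
    for p in patients:
        onset = p["aki_onset"]
        if onset is None:
            continue  # no severe injury, so this patient does not count toward sensitivity
        total += 1
        if any(onset - lead <= t <= onset and score >= threshold
               for t, score in p["predictions"]):
            hits += 1
    return hits / total if total else float("nan")

t0 = datetime(2019, 1, 1)
patients = [
    {"aki_onset": t0 + timedelta(hours=60),
     "predictions": [(t0 + timedelta(hours=20), 0.8)]},  # flagged 40 hours before onset
    {"aki_onset": None, "predictions": [(t0, 0.3)]},      # never developed severe AKI
]
print(lead_time_sensitivity(patients, threshold=0.5))  # 1.0 in this toy example
```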

The results, published in the journal Nature, suggest doctors could one day get early warnings in time to prevent some patients from suffering kidney damage, says Eric Topol, a professor at Scripps Research who wasn’t involved in the research. “This is remarkable work,” he says. “You could potentially mitigate the need for dialysis or kidney transplant, or prevent a patient’s death.” More than half of adults admitted to an ICU end up with acute kidney injury, which can be lethal. But if detected early, the condition is often easy to treat or prevent by increasing fluids or removing a risky medication.

Alphabet has a ready-made vehicle to help commercialize its research. Kidney-protecting algorithms would be a perfect upgrade to a mobile app called Streams being tested by DeepMind in some British hospitals, Topol says. On Wednesday, DeepMind and its collaborators separately published results showing that using Streams, doctors missed only 3 percent of cases of kidney deterioration, compared with 12 percent missed without the app.

That version of Streams doesn’t use DeepMind’s specialty, machine learning; it alerts staff based on results from a single blood test. But the plan is to merge the two threads of research. Using Streams, physicians could be alerted to predictions of acute kidney injury, says Dominic King, a former surgeon who leads DeepMind’s health effort—and eventually other conditions as well, like sepsis or pancreatitis. “We want to move care from reactive firefighting, which is how you spend most of your life as a physician, to proactive and preventive care,” he says.

That kind of shift is difficult in a hospital setting, with its entrenched rules and labyrinthine chains of command. DeepMind has previously recognized that any AI software it designs for health care needs to integrate with existing hospital workflows. Hence its decision to first test an AI-free version of Streams in hospitals before adding any predictive capabilities.

One potential challenge is notification fatigue. An inevitable side effect of making predictions is false positives—the algorithm sees signs of a disease that never develops. Even if that sparked unnecessary care, says DeepMind researcher Nenad Tomasev, the algorithm would still on balance likely save medical staff time and money by avoiding serious complications and interventions like dialysis. The question, though, is how to account for human behavior. False positives increase the risk that alerts become annoying and eventually are ignored.
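One way to make that tension concrete is to count how many notifications staff would see for every case that actually develops, at a given alert threshold. The sketch below is a generic illustration with invented data, not how DeepMind or the VA tuned their system:

```python
# Illustrative sketch, not from the paper. Given (risk_score, had_aki) pairs, count
# true and false alerts at an alert threshold and report how many notifications a
# clinician would see for each case that really develops.
def alerts_per_true_case(scored_patients, threshold):
    true_alerts = sum(1 for score, had_aki in scored_patients if score >= threshold and had_aki)
    false_alerts = sum(1 for score, had_aki in scored_patients if score >= threshold and not had_aki)
    if true_alerts == 0:
        return float("inf")
    return (true_alerts + false_alerts) / true_alerts

# Raising the threshold cuts false alerts but also misses more real cases -- the
# notification-fatigue tradeoff described above.
example = [(0.9, True), (0.8, False), (0.7, False), (0.4, True), (0.2, False)]
print(alerts_per_true_case(example, threshold=0.5))  # 3.0: three alerts per caught case
```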

Topol of Scripps notes that while the algorithm performed well on historical data from the VA, DeepMind still needs to show that it can predict kidney injury prospectively, in patients being treated in real time. Such studies are more complex, lengthy, and expensive than testing an idea against a pile of existing data, and Topol says few have been done for medical applications of AI. When they have been, as in trials of software that reads retinal images, performance has been less impressive than in studies using past data.

Another potential hurdle: The algorithm relies heavily on localized demographic data to make its predictions, meaning the system developed for the VA may not generate good predictions at other hospitals without retraining on their patients. Even in the study, the algorithm was less accurate at predicting kidney deterioration in women, because they represented only 6 percent of the patients in the dataset.
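A standard check for that kind of gap is to break out sensitivity by demographic group. The sketch below is a generic illustration with invented records, not the paper's evaluation code:

```python
from collections import defaultdict

# Illustrative subgroup check. Given hypothetical (group, predicted_positive,
# actually_developed_aki) records, report the fraction of real cases caught per group.
def sensitivity_by_group(records):
    caught = defaultdict(int)
    cases = defaultdict(int)
    for group, predicted, actual in records:
        if actual:
            cases[group] += 1
            if predicted:
                caught[group] += 1
    return {g: caught[g] / cases[g] for g in cases}

records = [
    ("male", True, True), ("male", True, True), ("male", False, True),
    ("female", False, True), ("female", True, True),
]
print(sensitivity_by_group(records))  # roughly {'male': 0.67, 'female': 0.5}
```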

Alphabet has launched numerous experiments in healthcare, though it doesn’t have much to show for it in its financial results—more than 80 percent of the company’s revenue still comes from ad clicks. An effort to offer electronic medical records was shut down in 2011. More recently the company has spun up experiments using AI to read medical images, and is testing software in India that screens for eye problems caused by diabetes. Alphabet’s Verily arm has focused on ambitious projects like nanoparticles that deliver drugs and smart contact lenses.

Two job ads posted by Google this month underline its commitment to its health division and the challenges the new effort faces. One seeks a head of marketing to create a “brand identity” for Google Health. The other asks for an experienced executive to lead work on deploying Google’s health technology in the US. The ad notes that Google has been “exploring applications in health for more than a decade.”

Alphabet’s predilection for big data could prove an advantage in healthcare. (People type around 1 billion health-related queries into Google’s search engine each day, Google Health VP David Feinberg said at the SXSW conference in Austin this year.) But it also brings challenges. The company has vast and lightly regulated stocks of information on online behavior. For health projects, it must negotiate access to medical records, whose use is bound by strict privacy rules, by finding partners in health care, as it did with the VA.

Alphabet’s health experiments have already run into regulatory and legal troubles. In 2017 the UK data regulator said one of DeepMind’s hospital collaborators had breached the law by giving the company patient data without patient consent, and access to more information than was justified. That background alarmed some privacy experts when Google said in November that it would absorb the Streams project from DeepMind, as part of an effort to unify its health care projects under new hire David Feinberg, previously CEO of Pennsylvania health system Geisinger. Google acquired DeepMind in 2014.

In June, a Chicago man filed a lawsuit against Google, the University of Chicago, and the University of Chicago Medical Center, alleging that personal data was not properly protected in a project using data analysis to predict future health problems. Google and the medical center have said they followed applicable best practices and regulations.

