Artificial intelligence may not be curing cancer, but it is already changing medicine

 

As recently as a year ago, artificial intelligence was still an amorphous concept in medicine. Almost every major hospital was tinkering with it, but hype about algorithms replacing doctors — or curing cancer — was outrunning reality.

Now many hospitals are moving swiftly to incorporate the technology into daily practice, promising to harness patient data to improve certain aspects of care and make medical services cheaper and more efficient.

The uses vary widely — from algorithms that can detect the onset of life-threatening conditions, to virtual assistants that help patients adhere to dietary restrictions, to programs that predict how long patients will stay in the hospital and whether they will come back. Digital health specialists say artificial intelligence is nowhere close to curing the as-yet incurable, but they say it has reached an inflection point where its use is becoming practical and demonstrably beneficial.

“In the next 24 months, you’ll see more change in the use of this technology in clinical practice than you’ve seen in the last 10 years,” said Dr. Todd Stewart, vice president of clinical integrated solutions at Mercy Health, a Catholic hospital system based in St. Louis. “But what will be shocking to many people is that they may not notice it.”

That’s because its use is almost invisible and its impact subtle, like a website that suddenly churns out information faster and more precisely. But the behind-the-scenes nature of artificial intelligence also raises a thicket of ethical questions that many physicians are grappling with. Is biased data creating algorithms that could lead to unfair treatment? Are AI systems inserting an unseen filter that could change the doctor-patient relationship in unhealthy ways? And, in the worst case, could third parties manipulate algorithms as a way to cheat the regulatory system, or deny needed care?

 

Those concerns are causing hospitals and doctors to pick their projects carefully. Many are using the technology to improve the efficiency of operations or for clerical or business tasks, such as billing. Those using it to deliver patient care are relying on algorithms to inform decision-making — not directly dictate it.

At Mercy Health, for example, doctors and data scientists are using algorithms to detect the onset of sepsis, a complication of infection that kills about 250,000 people a year in the U.S. Earlier detection gives caregivers time to implement a response.
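
The specifics of Mercy’s model aren’t described here, but as a rough illustration, a minimal rule-based screen might flag patients who meet two or more of the standard SIRS (systemic inflammatory response syndrome) criteria. In the sketch below, the field names, thresholds, and example patient are assumptions for illustration, not Mercy’s actual system.

```python
# Minimal illustrative sketch of a rule-based sepsis screen -- not Mercy Health's
# actual algorithm. It flags patients who meet two or more SIRS criteria,
# a common starting point for early-warning systems. Field names are hypothetical.

def sirs_flags(vitals: dict) -> list[str]:
    """Return which SIRS criteria a patient's latest readings meet."""
    flags = []
    if vitals["temp_c"] > 38.0 or vitals["temp_c"] < 36.0:
        flags.append("temperature")
    if vitals["heart_rate"] > 90:
        flags.append("heart rate")
    if vitals["resp_rate"] > 20:
        flags.append("respiratory rate")
    if vitals["wbc_k_per_ul"] > 12.0 or vitals["wbc_k_per_ul"] < 4.0:
        flags.append("white blood cell count")
    return flags


def sepsis_alert(vitals: dict) -> bool:
    """Alert when two or more criteria are met, prompting clinician review."""
    return len(sirs_flags(vitals)) >= 2


# Hypothetical patient who trips the temperature and heart-rate criteria.
patient = {"temp_c": 38.6, "heart_rate": 104, "resp_rate": 18, "wbc_k_per_ul": 9.5}
print(sepsis_alert(patient))  # True
```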

At Beth Israel Deaconess Medical Center in Boston, hospital officials are preparing to launch a new program that uses Amazon’s machine learning technology to manage its operating room schedule. On its face, that seems like a small change. But consider this: ORs are the most expensive real estate in a hospital, costing about $100 a minute to run. If their use is calibrated more precisely, based on the specific characteristics of the patient and doctor, then the hospital can save time and money on each procedure, and doesn’t need to build more operating rooms to do more procedures.

“With this machine learning technology, we took 15 surgeons and were able to free up 30 percent of their capacity,” said John Halamka, chief information officer at Beth Israel Deaconess. He added that the efficiency gains also help with new reimbursement contracts that reward hospitals for lowering the cost of surgeries.

“What we want is the right care, for the right patient, at the right place and the right time,” Halamka said. “The incentives are completely changed. I want the amounts of OR time to be as small as we possibly can achieve.”
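
The scheduling gains Halamka describes come down to simple arithmetic: if historical data lets a hospital predict how long a given surgeon’s cases actually run, it can book tighter blocks and bank the difference at roughly $100 a minute. The toy sketch below shows that calculation with hypothetical case times; it is not Beth Israel’s system or Amazon’s tooling.

```python
# Toy sketch of the scheduling arithmetic described above -- not Beth Israel's
# actual system or Amazon's machine learning service. Case durations are hypothetical.
OR_COST_PER_MINUTE = 100  # rough figure cited in the article

# Hypothetical historical durations (minutes) for one surgeon and one procedure type.
history = [95, 102, 88, 110, 97, 93, 105, 99]

# A naive scheduler books a generous fixed block; a data-driven one books closer
# to what this surgeon actually needs (here, roughly the 90th percentile of past cases).
fixed_block = 150
predicted_block = sorted(history)[int(0.9 * len(history))]

saved_minutes = fixed_block - predicted_block
print(f"Predicted block: {predicted_block} min, "
      f"saving {saved_minutes} min (~${saved_minutes * OR_COST_PER_MINUTE}) per case.")
```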

Swift advances drive uptake


The increasing use of artificial intelligence by hospitals is fueled by rapid advances in the technology. While AI has been available for some tasks in medicine since the 1980s, its recent growth has been enabled by cloud computing and by the enhanced ability of so-called machine learning systems to comb through vast stores of unstructured information, such as doctors’ notes.

Typically, hospitals have to carefully clean and organize the data so machine learning systems can understand it and deliver meaningful insights. But earlier this week, a new study reported that a program given raw electronic medical record information was able to achieve slightly better results in predicting patient mortality and readmission risk, among other outcomes, than systems that rely on pre-scrubbed data.

“Now it’s possible to take messy data … from different institutions and put it all together in a useable input format, from which you can now make actionable predictions,” said Nigam Shah, an associate professor of bioinformatics at Stanford University who was a co-author of the study.

“This enables prediction and action (by hospitals) in a very grand way, because the grunt work of data exchange and data transformation is now done by a computer,” Shah said.
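
The study’s models were far more sophisticated, but the underlying idea, predicting an outcome such as readmission directly from messy free text rather than from hand-curated fields, can be illustrated with a toy example. The notes, labels, and model below are assumptions chosen for brevity, not the study’s method.

```python
# Toy illustration of predicting readmission from raw clinical notes.
# The data is entirely made up, and a real system would use far richer
# records and models than this bag-of-words classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "pt admitted with chf exacerbation, dyspnea, lasix started",
    "elective knee arthroplasty, uncomplicated, ambulating day 1",
    "copd flare, second admission this quarter, home o2 arranged",
    "routine cholecystectomy, tolerated well, discharged home",
]
readmitted_within_30_days = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, readmitted_within_30_days)

new_note = ["chf patient, worsening edema, medication adherence unclear"]
print(model.predict_proba(new_note)[0][1])  # estimated probability of readmission
```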

At Stanford, a team of scientists developed a program to identify patients likely to die within three to 12 months, so the hospital’s palliative care team could tailor services for those patients. The team has also developed a program to detect familial hypercholesterolemia, a rare genetic condition that results in a twentyfold increase in the risk of coronary artery disease. About 90 percent of Americans with the condition are undiagnosed and not getting medication to treat it, which often results in severe medical problems such as heart attacks.

Detecting the condition not only alerts patients to the problem, but also enables doctors to advocate with insurers and drug makers for access to the expensive medicines that treat it.

“If you are the insurer, and the seven cases we found are going to stay with you for five years, and you spend $84,000 treating those people, you will prevent two to three heart attacks,” Shah said. “So you spent a half a million in total drug expenses, but you prevent the heart attacks, which cost way more.”
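
Shah’s back-of-the-envelope math can be reconstructed as follows, on the assumption that the $84,000 figure is per patient over the five years, the reading that matches his “half a million” total.

```python
# Reconstructing the arithmetic in Shah's example, using only figures from the quote.
patients_found = 7
drug_cost_per_patient = 84_000   # assumed to cover the five-year horizon in the quote
heart_attacks_prevented = 2.5    # midpoint of "two to three"

total_drug_spend = patients_found * drug_cost_per_patient
cost_per_event_avoided = total_drug_spend / heart_attacks_prevented

print(f"Total drug spend: ${total_drug_spend:,}")                              # $588,000
print(f"Drug spend per heart attack avoided: ${cost_per_event_avoided:,.0f}")  # $235,200
```

At those numbers, the drug spending works out to roughly $235,000 per heart attack avoided, the threshold a prevented event would need to exceed for the treatment to pay for itself.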

A cautious approach


Despite the promise of artificial intelligence in medicine, doctors and hospitals are proceeding with caution.

Specialists who follow the interaction between technology and medicine closely say there is a fundamental tension between the two fields. Those in technology tend to act quickly on their discoveries, commercializing new products and disrupting industries at the earliest opportunity. But doctors tend to adopt technology at a much slower pace, because they must build a scientific body of evidence to prove that a new product is safe and effective for patients.

“It takes a very long time to change standard of care,” said Jim Golden, a managing director at the consulting and auditing firm PwC who tracks the use of artificial intelligence in medicine. Even in cases where physicians achieve groundbreaking results using a particular device, drug, or method, it often takes years for it to be adopted by physicians across the U.S.

“There is a trust factor,” Golden said. “They have to know it works and it has to be clinically reviewed. You can’t just come in as a consultant and say, ‘tah-dah.’ It doesn’t work that way. It’s got to be proven, and it’s got to come from a trusted source.”

So far, the body of evidence on artificial intelligence is exceedingly small relative to research underpinning the development of new drugs and devices. What’s more, the scientific infrastructure to test the impact of algorithms is still under development, making the generation of evidence more difficult, and the adoption of the technology slower.

Shah and his colleagues are proposing a new process at Stanford for verifying the efficacy of artificial intelligence-based treatments. The idea is to increase physician training on what AI can and cannot accomplish, and to test the practical, financial, and ethical challenges of incorporating these tools into practice.

“In the same way we take bench to bedside for a new molecule somebody invented, can we not do keyboard to bedside for a new algorithm that we built?” he asked.

The ethical questions associated with the expanded use of artificial intelligence are considerable. If algorithms are trained on bad data, or data that lacks diversity, their conclusions can become inaccurate or biased. In addition, if existing medical evidence indicates that patients with a particular condition typically die within a defined time frame, and an AI-based technology references that data to deliver a conclusion, then new treatments — and funding for those treatments — could be inappropriately denied.

There is also a concern that an algorithm can form a barrier between doctors and patients, with a computer-generated conclusion invisibly shaping their relationship, and their decision-making, in ways that neither party fully comprehends. The technology also opens the possibility that third parties may design algorithms with the specific intent of circumventing regulatory standards, or denying coverage of care that patients need.

The antidote to all of this, Shah said, is to talk about it and disclose more information about the way algorithms are designed, trained, and put into use.

“We want to talk about these issues early on so that it takes away the fear-mongering that can happen,” Shah said, adding that there is no reason to turn the use of artificial intelligence in medicine into another Y2K.

“Everybody panicked about it and nothing happened,” he said. “We don’t want to rush in to make a policy or ethical framework before experimenting a little bit in a controlled environment.”

Bending the arc toward innovation


Right now, that experimentation is unfolding rapidly, and the demonstrated benefits appear to be pushing hospitals toward adoption. Golden said increased uptake is occurring in areas related to imaging and electronic medical records.

In imaging, he said, developers of AI have demonstrated technology that surpasses human capabilities in detecting disease, or the absence of disease, based on an ability to instantly cross-reference huge volumes of information.

In electronic medical records, many hospitals are driven not only by a desire to mine the data for clinical insights, but also by the need to relieve physicians and nurses of crushing clerical burdens that are leading to burnout. There is also the related problem of information overload, as it becomes harder to keep pace with new discoveries and shifting standards.

“The amount of data and new protocols coming into hospitals are accelerating, both in number and in velocity,” Golden said. “We’ve talked to oncologists who say, ‘We get nine cancer protocols a week. There is no way we keep up with them. Can an AI system help us by saying, of the 300 patients we are treating in clinic, here are the 10 that might respond to the protocol published last week?’”

While AI systems cannot always perform at that level, the technology is quickly approaching it, Golden said, and hospitals and doctors are more interested than ever before.

“We finally have enough data and computing power on which we can build systems to ask and answer really interesting questions,” he said. “I think things are going to happen fast. But how we bring the physician into the design and testing of these systems is going to be really important.”