Health care bots are only as good as the data and doctors they learn from

The number of tech companies pursuing health care seems to have reached an all-time high: Google, Amazon, Apple, and IBM’s Watson all want to change health care using artificial intelligence. IBM has even rebranded its health offering as “Watson Health — Cognitive Healthcare Solutions.” Although technologies from these giants show great promise, the question of whether effective health care AI already exists or whether it is still a dream remains.

As a physician, I believe that to understand what counts as artificially intelligent in health care, you first have to define what it means to be intelligent in health care. Consider the Turing test, which a machine passes when its responses become indistinguishable from a human's.

Joshua Batson, a writer for Wired magazine, has mused about whether there is an alternative to the Turing test, one where the machine doesn't just seem like a person, but like an intelligent person. Think of it this way: If you were to ask a random person about symptoms you're experiencing, they'd likely reply, "I have no idea. You should ask your doctor." A bot supplying that response would certainly be indistinguishable from a human — but we expect a little more than that.

The challenge of health care AI

Health is hard, and that makes AI in health care especially hard. Interpretation, empathy, and knowledge all have unique challenges in health care AI.

To date, interpretation is where much of the technology investment has gone. Whether for typed text or voice recognition, natural language processing (NLP) has seen enormous investment, including Amazon Comprehend, IBM Natural Language Understanding, and Google Cloud Natural Language. But even though health care has plenty of domain-specific interpretation challenges, those challenges are really no greater in this sector than in others.

Similarly, while empathy is particularly important in the emotionally charged field of health care, bots are equally challenged to strike just the right tone in retail customer service, legal services, or childcare advice.

That leaves knowledge. The knowledge needed to be a successful conversational bot is where health care diverges greatly from other fields. We can divide that knowledge into two major categories: What do you know about the individual? And what do you know about medicine in general that will be most useful to their individual case?

If a person has diabetes and high cholesterol, for example, then we know from existing data that that person's risk of a heart attack is higher, and that aggressive blood sugar and diet control significantly lower that risk. That profile knowledge combines with general medical knowledge: multiple randomized controlled trials have found that diabetics with uncontrolled blood sugar and high cholesterol are twice as likely as others to have a cardiac event.
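The combination described above can be sketched in code. This is a minimal illustration, not clinical guidance: the function name, the profile fields, and the 0.6 reduction factor are hypothetical; only the "twice as likely" multiplier comes from the article.

```python
def cardiac_risk_multiplier(profile: dict) -> float:
    """Combine profile knowledge with expert-defined domain knowledge
    to produce a relative-risk multiplier (illustrative only)."""
    multiplier = 1.0
    # Domain knowledge (from the article): diabetes plus high cholesterol
    # roughly doubles the risk of a cardiac event.
    if profile.get("diabetic") and profile.get("high_cholesterol"):
        multiplier *= 2.0
    # Profile knowledge: aggressive blood sugar control lowers that risk.
    # The 0.6 factor is a made-up placeholder, not a clinical figure.
    if profile.get("blood_sugar_controlled"):
        multiplier *= 0.6
    return multiplier

patient = {"diabetic": True, "high_cholesterol": True,
           "blood_sugar_controlled": False}
print(cardiac_risk_multiplier(patient))  # 2.0
```

A real system would source both the rules and the coefficients from clinical evidence rather than hard-coding them.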

What is good enough?

There are two approaches to creating an algorithm that delivers a customized message. Humans can create it based on their domain knowledge, or computers can derive the algorithm based on patterns observed in data — i.e., machine learning. With a perfect profile and perfect domain knowledge, humans or machines could create the perfect algorithm. Combined with good interpretation and empathy you would have the ideal, artificially intelligent conversation. In other words, you’d have created the perfect doctor.
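The two approaches above can be contrasted with a toy sketch. All data here is fabricated for illustration: an expert writes the rule directly, while the "machine" rediscovers the same rule by comparing observed event rates against the base rate.

```python
from collections import defaultdict

# Fabricated records: (diabetic, high_cholesterol, had_cardiac_event)
records = [
    (True, True, True), (True, True, True), (True, False, False),
    (False, True, False), (False, False, False), (True, True, False),
]

# Approach 1: an expert encodes domain knowledge as a rule.
def expert_rule(diabetic: bool, high_chol: bool) -> bool:
    return diabetic and high_chol

# Approach 2: derive the rule from patterns in the data —
# flag feature combinations whose event rate exceeds the base rate.
counts = defaultdict(lambda: [0, 0])  # combo -> [events, total]
for diabetic, high_chol, event in records:
    counts[(diabetic, high_chol)][0] += event
    counts[(diabetic, high_chol)][1] += 1
base_rate = sum(event for *_, event in records) / len(records)
learned_rule = {combo: events / total > base_rate
                for combo, (events, total) in counts.items()}

print(learned_rule[(True, True)])  # True: the data rediscovers the expert rule
```

With perfect data the two approaches converge; the interesting cases are the imperfect ones discussed next.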

The problem arises when the profile or domain knowledge is less than perfect (which it always is), and you then have to determine when it is "good enough."

The answer to “When is that knowledge good enough?” really comes down to the strength of your profile knowledge and the strength of your domain knowledge. While you can make up a shortfall in one with the other, inevitably, you’re left with something very human: a judgment call on when the profile and domain knowledge is sufficient.
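That judgment call can be made explicit as a threshold check. This is a hedged sketch: the equal weighting and the 0.7 cutoff are arbitrary placeholders, and real systems would tune both against outcomes.

```python
def good_enough(profile_strength: float, domain_strength: float,
                threshold: float = 0.7) -> bool:
    """Decide whether combined knowledge clears the bar for acting.
    A shortfall in one kind of knowledge can be made up by the other,
    so the two strengths are blended before comparing to the cutoff.
    (Weights and threshold are illustrative assumptions.)"""
    combined = 0.5 * profile_strength + 0.5 * domain_strength
    return combined >= threshold

print(good_enough(0.9, 0.6))  # True: strong profile offsets weaker domain knowledge
print(good_enough(0.4, 0.5))  # False: defer to a human clinician
```

Making the threshold explicit at least turns an implicit human judgment into something that can be audited and adjusted.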

Lucky for us, rich and structured health data is more prevalent than ever before, but making that data actionable takes a lot of informatics and computationally intensive processes that few companies are prepared for. As a result, many companies have turned to deriving that information through pattern analysis or machine learning. And where you have key gaps in your knowledge — like environmental data — you can simply ask the patient.
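Asking the patient to fill key gaps can be as simple as checking the profile against the fields the algorithm needs. The field names and question wording below are hypothetical.

```python
# Fields the (hypothetical) algorithm needs; recent_travel stands in
# for the environmental data mentioned above.
REQUIRED_FIELDS = ["medications", "smoker", "recent_travel"]

QUESTIONS = {
    "medications": "What medications are you currently taking?",
    "smoker": "Do you smoke?",
    "recent_travel": "Have you traveled in the last two weeks?",
}

def questions_for_gaps(profile: dict) -> list[str]:
    """Return one question for each required field missing from the profile."""
    return [QUESTIONS[field] for field in REQUIRED_FIELDS
            if field not in profile]

print(questions_for_gaps({"medications": ["glyburide"]}))
# ['Do you smoke?', 'Have you traveled in the last two weeks?']
```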

Companies looking for new “conversational AI” are filling these gaps in health care, beyond Alexa and Siri. Conversational AI can take our health care experience from a traditional, episodic one to a more insightful, collaborative, and continuous one. For example, conversational AI can build out consumer profiles from native clinical and consumer data to answer difficult questions very quickly, like “Is this person on heart medication?” or “Does this person have any medications that could complicate their condition?”
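The two example questions above can be answered directly once a profile exists. Everything here is illustrative: the medication lists, the interaction table, and the profile itself are toy stand-ins for real clinical data sources.

```python
# Illustrative reference data — a real system would use curated
# drug databases, not hard-coded sets.
HEART_MEDS = {"atenolol", "lisinopril"}
INTERACTIONS = {("ciprofloxacin", "glyburide")}  # known interacting pair

profile = {"medications": ["glyburide", "lisinopril"],
           "conditions": ["diabetes"]}

def on_heart_medication(profile: dict) -> bool:
    """'Is this person on heart medication?'"""
    return any(med in HEART_MEDS for med in profile["medications"])

def complicating_interactions(profile: dict, proposed_drug: str) -> list:
    """'Does this person have any medications that could complicate
    their condition?' — here, ones that interact with a proposed drug."""
    return [med for med in profile["medications"]
            if (proposed_drug, med) in INTERACTIONS]

print(on_heart_medication(profile))                         # True
print(complicating_interactions(profile, "ciprofloxacin"))  # ['glyburide']
```

The second function is exactly the kind of check that would surface the Cipro-Glyburide problem in the sample conversation below.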

Only recently has the technology been able to build profiles this in-depth on the fly. It's becoming that perfect doctor, knowing not only everything about your health history, but how all of it connects to combinations of characteristics. Now, organizations are beginning to use that profile knowledge to derive engagement points that better characterize some of the "softer" attributes of an individual, like self-esteem, literacy, or other factors that will dictate their level of engagement.

Think about all of the knowledge that medical professionals have derived from centuries of research. In 2016 alone, Research America estimated, the U.S. spent $171.8 billion on medical research. But how do we capture all of that knowledge, and how could we use it in conversational systems? There is no standardized, machine-usable form for it, which is why we've developed so many rules-based or expert systems over the years.

It's also why there's a lot of new investment in deriving domain knowledge from large data sets. Google DeepMind's partnership with the U.K.'s National Health Service is a great example. By combining the NHS's rich data on diagnoses, outcomes, medications, test results, and other information, DeepMind can use AI to derive patterns that will help it predict an individual's outcome. But do we have to wait for large, prospective data analyses to derive medical knowledge, or can we start with what we know today?

Putting data points to work

Expert-defined and machine-defined knowledge will have to be balanced in the near term. We must start with the structured data that is available, then ask about what we don't know. Domain knowledge should start with expert consensus, which machine learning can then refine with patterns observed in data.

Knowing one particular data point about an individual can make all the difference in reading their situation. That's when you'll start getting questions that seem to make no sense whatsoever to you, but make all the sense in the world to the machine. Imagine a conversation like this:

BOT: I noticed you were in Charlotte last week. By any chance, did you happen to eat at Larry’s Restaurant on 5th Street?

USER: Uh, yes, I did actually.

BOT: Well, that could explain your stomach problems. There has been a Salmonella outbreak reported from that location. I’ve ordered Amoxicillin and it should reach you shortly. Make sure to take it for the full 10 days. The drug Cipro is normally the first-line therapy, but it could interact badly with your Glyburide. I’ll check back in daily to see how you’re doing.

But while we wait for machines to detect such patterns, the knowledge that is already out there should not be overlooked, even if putting it to work takes a lot of informatics and computation. I’d like to think the perfect AI doctor is just around the corner. But my guess is that those who take a “good enough” approach today will be the ones who get there first. After all, for the many people who don’t have access to adequate care today, and for all that we’re spending on health care, we don’t yet have a health care system that is “good enough.”

Dr. Phil Marshall is the cofounder and chief product officer at Conversa Health, a conversation platform for the health care sector.
