Generative AI ‘not reliable yet’

By Bill Siwicki | August 23, 2023

Despite its current limitations – it will never replace “empathy, listening, respect, personal preference” – it’s clear artificial intelligence is leading to fundamental changes in care delivery, says John Halamka, president of Mayo Clinic Platform.

Artificial intelligence is all anyone can talk about these days. And in healthcare, despite some very real concerns about where it could be headed, it’s already achieving significant results across a range of use cases.

Nearly one-third of patients already say they would be comfortable with artificial intelligence leading a primary care appointment, according to a recent survey. You might say, “Well, that’s only 32%.” But at this very early stage of AI’s rollout in healthcare, that’s an impressive number of people willing to trust relatively new technology to tend to their physical health.

Can AI be your doctor?

Still, as with any breakthrough technology, there’s a lot of hype. AI is not perfect. It has hallucinations (mistakes). It cannot do everything; it is not a panacea. It cannot replace your doctor. Whoops. Wait a minute.

“If your doctor could be replaced by AI, your doctor should be replaced by AI.” That statement was made a few years ago by Dr. John Halamka, the Mayo Clinic Platform president and former chief information officer who has been one of the preeminent voices in healthcare information technology for decades.

Things have changed quite a bit in just a few years’ time as AI has advanced faster than many expected.

Just this past month, former FDA Commissioner Dr. Scott Gottlieb made headlines nationwide with an op-ed explaining how AI “may take on doctors’ roles sooner rather than later.”

So what does Halamka say today? Healthcare IT News sat down with him to revisit that original statement and expand on it, and to get answers to many other AI questions.

“What I meant by that was, why do we go to clinicians?” asked Halamka, a medical doctor who holds a master’s degree in medical informatics from Harvard.

“Empathy, listening, respect, personal preference. No matter what generative AI quality and accuracy we get to, it is unlikely those generative AI systems will have empathy, will have respect, will have the kinds of things we want from our humans.

“So in effect, if all you have is a doctor who’s reading a textbook back to you, you probably don’t want that doctor,” he added. “So, what you hope is that clinicians will use generative AI to make them better diagnosticians. That is, they’ll have access to more data, more literature, more time with the patient because they’ll be less burdened with some of the administrative tasks.

“So, we can modify my statement to say, ‘Doctors and nurses who use AI will replace doctors and nurses who don’t,’” he cautioned.

‘Careful with the use cases we choose’

Halamka mentioned generative AI, the kind of artificial intelligence behind wildly popular new applications like ChatGPT. This kind of AI holds both promise and risk.

“Let’s step back and look at predictive and prescriptive AI,” Halamka explained. “The idea is, ‘Hey, Bill, do you have a disease? Will you have a disease? If you do, what do we do about it?’ Those kinds of AI, you can measure truth, right? A set of inputs. A set of outputs. What actually happened? Did they work? Did they not?

“Generative AI is a very different beast because it’s not deterministic, it’s probabilistic,” he continued. “And really, all it’s doing is predicting the next word in a sentence. Every time you write a prompt, you’re going to get a subtly different answer. So how do you assess quality and accuracy when every time you use it, it’s different?”
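To make Halamka’s point concrete, here is a minimal toy sketch in Python – not anything from Mayo Clinic or the interview – of what probabilistic next-word sampling looks like. The vocabulary and probabilities below are invented purely for illustration; a real large language model scores an enormous vocabulary with a neural network, but the sampling step behaves the same way, which is why the same prompt can return subtly different answers.

```python
import random

# Toy next-word distribution a language model might produce after the
# prompt "The patient should" – these words and probabilities are
# invented for illustration, not taken from any real model.
next_word_probs = {
    "rest": 0.40,
    "hydrate": 0.25,
    "follow": 0.20,
    "exercise": 0.15,
}

def sample_next_word(probs):
    """Draw one word at random, weighted by the model's probabilities."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Running the same "prompt" five times yields varying continuations –
# the non-determinism Halamka describes.
for _ in range(5):
    print("The patient should", sample_next_word(next_word_probs))
```

A deterministic, predictive model, by contrast, maps the same inputs to the same output every time – which is what makes its accuracy straightforward to measure against what actually happened.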

Halamka said he would argue that AI of all kinds needs transparency – how was it created? – as well as a sense of consistency.

“That is, every time I use it, it’s going to give a sort of reasonable result and reliability that I actually feel like I can use this for a given context,” he noted. “So, I think where we are with generative AI is it’s not transparent, it’s not consistent, and it’s not reliable yet. So we have to be a little bit careful with the use cases we choose.”

AI at the Mayo Clinic

The Mayo Clinic has been working over the past few years with Google on a landmark AI partnership. In June, the two organizations showcased some of the generative AI use cases they’re working on together.

“I have been in academic healthcare for almost 40 years, and one of the challenges with academic healthcare is that every project we do is ad hoc,” Halamka explained. “That is, you get an idea, the innovator talks to your lawyers, 18 months later the contracts are signed, and work finally begins. It’s a very inefficient process.

“What have we done over these last three years?” he said of the partnership. “Templated the process. So we go from idea to running code in two weeks. And how does that happen? Well, we took the entire corpus of Mayo data – structured, unstructured, -omics, telemetry, images, digital pathology – deidentified it, moved it to a cloud container, and now it takes almost no time to bring any innovator into that cloud container to work with the data.”

The barriers to innovation are now very small, Halamka stated.

“If you look at what Google has done, it’s given us a set of environments and tools so we could be exceedingly agile,” he explained. “I’ll give you a quick example. We run an accelerator, and we sign up 12 companies and give them access to this data set for product development. And we go from company selection to live in two weeks for 12 companies.

“That’s the kind of thing that templatizing these processes enables you to do – that’s amazing work,” he added.

Halamka has much more insightful perspective to offer on the promise of AI. See my in-depth interview with him in this HIMSS TV Digital Checkup video.
