Act, Don’t Overreact: The Deliberate Approach Leaders Are Taking with Generative AI
By Kate Gamble—Artificial intelligence is the subject of more conversations, conference panels, and podcasts than perhaps any other topic in healthcare. And while some believe it has reached its saturation point, others are still thirsty, and for good reason.
One of those is Paul Curylo, CISO at Inova Health System, who believes Generative AI (GenAI) presents “a wonderful opportunity to leverage capabilities to create innovative solutions to the problems that we’re facing.” But as organizations race to implement tools like ChatGPT, it has become more critical than ever to ensure that cybersecurity doesn’t get left at the starting line.
“When it comes to the security piece, I think more attention upfront is necessary in order to do it well and to achieve the value that we expect,” he added. During a recent discussion, Curylo and his co-panelists — J.D. Whitlock (CIO, Dayton Children’s Hospital), Ed Higgins (Executive Director, Security & Compliance Solutions and Security Office Leader, Quisitive) and Jimmy Ledbetter (VP, AI Strategy and Services, Quisitive) — talked about the security risks that come with adopting AI, the key considerations when evaluating solutions, and how to select the right partners.
Building a document
At Dayton Children’s, Whitlock’s team is looking to leverage ambient AI in a number of areas, including clinical documentation, which can increase efficiency in the care delivery process.
Inova is using GenAI to improve communication between providers and patients and to enable self-scheduling. The objective, Curylo pointed out, is to “ease friction points for physicians and clinicians. We’re looking for a return on investment and looking at whether it makes sense for our community.”
Another promising use case is document generation, according to Ledbetter, noting that GenAI has “allowed access to unstructured data in a way we’ve never seen before,” he said. “When clinicians are taking notes, they have to go to 3, 4 or 5 different systems to try to build a document. They’re like, ‘I’ve got my diagnosis here and my patient record here. How do I put all of this together?’” The ability to start building pipelines and using natural language to organize information more effectively, Quisitive has found, can save “an immense amount of back-office time.”
Higgins agreed, adding that GenAI can populate answers by leveraging the knowledge base of questions that have already been answered, and picking up on the nuances of how they were addressed. In doing so, it has enabled his team to more quickly and effectively interact with customers.
Cyber considerations
Of course, there is a flipside to all of this, said Whitlock. “You can convince me that you’ve built a cool new tool powered by generative AI. That’s awesome. But I’m still going to have our cybersecurity team make sure you’re doing all the right things before we let you touch our PHI,” he said. The same holds true with any tool, GenAI or not.
Noted Curylo, once sensitive data goes into the model, it is extremely challenging to get it out. “How can you be certain that you’ve got it all? That’s probably the biggest concern I have,” he noted. Another is the ability to trace the integrity of the data, he added. “A lot of the onus is on the input piece, making sure that the data going in is actually the data you want to go in.” If the process of getting information into a model was “a little sloppy,” which can happen, “that’s going to result in some problems down the road.”
That’s where Quisitive can fill a need, said Higgins. “We’re consultants as much as we are technology deployers. We’ve looked at thousands of environments from a security lens to identify where the weaknesses are.” As a result, he believes they’re well-positioned to recommend the right techniques to safeguard data — starting with a long-term roadmap for implementing GenAI safely and effectively.
Below, the panelists shared some key best practices based on their experiences.
- Start with the problem. As with any initiative, it’s imperative to focus first on the ‘why,’ according to Curylo. “We’re not going to just stop everything and do AI. But when we start having a conversation, we get to the underlying reasons” — and the problems that need to be solved.
- Prioritize integration. For Whitlock, selecting vendors that have “the slickest integration” with Epic (Dayton Children’s EHR) is paramount. “If you want to do some cool, new thing with AI, it needs to be integrated; it’s the same thing we tell our doctors and nurses — it needs to work inside the clinical workflow.”
- Exercise restraint. A common mistake Ledbetter has seen? “People are trying to throw all of the data at the model. They have an application which is basically a Pandora’s box; it has everything you want,” he said. Instead, he advised leaders to “think about AI on an application-by-application basis.” By using an AI database, his team is able to “subset the data out and load the relevant use cases into that index.” That way, “they don’t have access to everything. We can lock down our databases. We can lock down the applications.”
- Bring in experts. Curylo is a firm believer that unless a health system already has subject matter experts on staff who know the technology well, it’s imperative to identify technology partners. And not just ones who can “creatively solve problems,” but those who will act as devil’s advocate “to understand abuse and subterfuge and how it can harm the organization or our patients,” he said. “That’s particularly important.”
- Avoid a narrow focus. For many organizations, the answer is in hiring consultants. Before selecting one, however, “make sure they cover the gamut and aren’t just the master of one domain,” Ledbetter noted. The gamut includes data, applications, infrastructure, and security — all of which touch AI. “Someone might be very good at the data side but neglect application development or innovation,” which can increase security risks.
- Go outside the box. Another solid piece of advice for choosing a partner? Make sure the strategy extends beyond technology. “Are they going to partner with you on the education side? Are they going to work with you to understand how the LLM works and how to get the data into the model itself? You need to be true partners,” said Ledbetter.
- Ask tough questions. An absolutely critical part of the process, according to Ledbetter, is determining whether in fact it’s a viable use case. “We work with the business users to say, ‘this is what the end result is going to be,’” he said. “I’ve worked on ROI discussions before where it was, ‘this is going to save you 100 hours per month.’ Is that truly worth the investment?” Some projects, on the other hand, could yield a savings of hundreds of thousands of hours per year. It’s important for all parties to understand that the “low-hanging fruit” might not generate much value. “We need to truly partner so we can grow together.”
And when leaders do find the right partner — and the right solution — the panelists recommended providing access to a “sandbox” that enables users to play with AI tools. Not only can it help get them accustomed to new solutions, it can also help prevent shadow AI, according to Ledbetter. “By not allowing them to experiment, they’re going to go to the LLM websites where they could be training the model on their data,” he said. “We’re standing up sandboxes to make sure we’re keeping that in a very clean, contained environment.”
The organizations that have done so have seen impressive results, Ledbetter added. “When we offer things like live-action prototyping, it’s usually standing room only. People want to get in. They want us to demystify what that looks like,” he noted. For Quisitive, “inviting people to the table and being able to show the prototype has helped develop a center of excellence.”
Finally, the panelists urged listeners to be deliberate when evaluating AI solutions and to “be as objective as possible before committing to a course of action that would be difficult, if not impossible, to unwind,” noted Curylo. “We all want to be innovative and creative in finding solutions, but we need to take time to ensure the right fit.”
But don’t take too long, advised Whitlock. “We don’t want to be behind the power curve,” he said, especially with something as promising as ambient AI. In fact, he believes that if health systems still haven’t dipped their toes into the water a year from now, “they’re going to start hearing from their providers,” especially when colleagues at the other hospitals where they practice tell them that certain tools are making documentation easier and patients happier. “These tools are going to become the standard of care for our providers. That’s a forcing function.”
And as that function plays an increasingly significant role in how care is delivered, it will become vital for CIOs and other leaders to ensure the right pieces are in place. “My advice is, don’t overreact to things with generative AI, because all of the same rules apply: people, process, technology, good governance, and good cybersecurity,” Whitlock said. “Don’t give up on those core essentials.”