In the healthcare industry, compliance falls short as an AI strategy. Chief AI officers, CIOs, and CISOs need to prioritize responsible AI usage to minimize potential data breaches that could lead not only to fines and litigation but also to reputational damage.
“It’s really a trust factor,” says Dave Meyer, chief data and AI officer at value-based care platform Reveleer. “[Protected health information or] PHI is paramount in healthcare, so we have to treat it responsibly. No one in our organization, including data scientists, has access to anything they don’t need to access. Data access needs to be strictly governed.”
Transparency is also critical because it reduces the risk of relying on what could be a hallucination.
“When we give AI results, or when we go through our data models, we support it with monitoring, evaluation, assessment and treatment (MEAT). So, for example, not only did we find the term, ‘diabetes,’ in a patient’s chart, there’s also an explanation of why we suggested this particular ICD [International Classification of Diseases] code,” says Meyer. “That way, when AI provides suggestions, the human still decides whether the suggestion is valid or invalid. We’re not trying to [replace] humans. We’re trying to make their job easier and more accurate.”
AI as a Problem-Solving Tool
While the ability to quickly identify health conditions and find correlations is powerful, it’s considerably less helpful if users must then manually wade through volumes of information, which could run to several hundred pages or more, to locate the references. Instead, AI can surface the references quickly, such as by identifying the pages of a document, or of a set of documents, on which those references can be found.
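To make the idea concrete, a suggestion of this kind might be represented as a record that pairs the suggested code and its rationale with pointers to the supporting pages. This is a hypothetical sketch; the field names and structure are illustrative, not a real product schema.

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    term: str                 # finding surfaced in the chart, e.g. "diabetes"
    icd_code: str             # suggested ICD code
    rationale: str            # explanation of why the code was suggested
    evidence_pages: list = field(default_factory=list)  # where to look

# A reviewer can jump straight to the cited pages instead of
# reading the whole chart.
s = Suggestion(
    term="diabetes",
    icd_code="E11.9",
    rationale="Term 'diabetes' appears with a supporting lab value.",
    evidence_pages=[12, 87, 203],
)
print(f"Review pages {s.evidence_pages} to validate {s.icd_code}")
```

The key point is that the human reviewer gets the location of the evidence, not just the answer.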
That sort of use case opens the door to GenAI. However, as in many other industry sectors, GenAI tends to be misunderstood. People who lack a foundational understanding of AI tend to believe that GenAI is the latest and greatest version of a single technology called “AI,” rather than one AI technique among many.
“I think people view GenAI as a panacea, and it is not a panacea, especially in the healthcare industry where you cannot just have a black box that says, ‘Here’s the answer, but we’re not going to tell you how we got there,’” says Meyer. “We’re using it for evidence extraction from the chart which we can then double check for hallucinations. We take that evidence and run it through our models.”
However, Reveleer also uses other AI techniques, such as rules-based systems, to pull evidence.
“A lot of people think they can upload a chart and then ask GenAI for the answer. It will give you an answer that looks okay on the surface, but they are not production level, customer trustworthy answers that are in the percentile of accuracy that [is necessary] in the healthcare industry,” says Meyer. “Healthcare is a high stakes industry where you’re trying to drive patient outcomes, and I don’t think that GenAI can be trusted on its own to provide that answer.”
Some of Healthcare’s Challenges and How to Address Them
One of healthcare’s biggest challenges is failing to understand that the accuracy of a prediction can, and often does, vary by use case. Since healthcare organizations need highly sensitive patient information to provide diagnoses and treatment, the confidence level matters greatly.
“Trust is a big factor, so being given a suggestion that is 70% accurate isn’t good enough. The stakes are too high. You have to balance the sensitivity and security of the data with who has access to it,” says Meyer.
Of course, trust must be earned by a vendor, particularly when patient records are involved. In Reveleer’s case, customer trust in its AI capabilities has been earned in a stair-step fashion over time. Specifically, the company began by automatically routing patient charts; later it added NLP techniques so patient information could be surfaced faster and validated. Now its AI provides automatic pointers to where critical information can be located.
“One of the biggest challenges is getting the data in an organized format that is usable,” says Meyer. “In order to build any AI model, you need to have a large quantity of data, and you need to govern that data appropriately. Managing your data is really the foundation of everything before you start building models. You also need to make sure that you know how to handle the data well.”
In addition to getting the foundational elements right, it’s important to choose the right tool for the right job.
“Data science still is a good method for solving a lot of these problems. Everybody’s trying to jump to GenAI as the solution. Don’t force that if you’re getting good results from data science,” says Meyer. “The same is true for rules-based systems. For example, if you see the word, ‘blood pressure’ and the reading next to it says 120 over 80, you don’t need a GenAI model to pull that out for you. Or, if the data is in a structured format, you can pull it out without any AI.”
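The blood pressure example Meyer gives can be handled with a simple rule. The sketch below is illustrative, assuming free-text chart notes; the pattern and function name are hypothetical, not part of any real product.

```python
import re

# Rules-based extraction: a regular expression is enough to pull a
# blood pressure reading out of free text -- no GenAI model required.
BP_PATTERN = re.compile(
    r"blood pressure\D{0,10}(\d{2,3})\s*(?:/|over)\s*(\d{2,3})",
    re.IGNORECASE,
)

def extract_blood_pressure(text: str):
    """Return (systolic, diastolic) readings found in free text."""
    return [(int(m.group(1)), int(m.group(2)))
            for m in BP_PATTERN.finditer(text)]

print(extract_blood_pressure("Today's blood pressure: 120 over 80."))
# → [(120, 80)]
```

A deterministic rule like this is cheap, auditable, and never hallucinates, which is exactly why it remains the right tool for this kind of extraction.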
However, don’t overlook the need for a human in the loop when it comes to AI.
“In the healthcare industry, machines need to be partnered with humans, because healthcare is too high stakes for a lack of human oversight. One suggestion may have a better than 90% confidence score while another only has a 50% confidence score,” says Meyer. “AI can help you cut through the noise and surface the good stuff quickly, but it’s always going to need the human element. We’re not trying to replace humans; we’re just trying to make them more efficient.”
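The triage Meyer describes can be sketched as a simple threshold split. This is an assumed illustration: the threshold, field names, and codes are invented for the example, and nothing is auto-accepted; the split only orders the human review queue.

```python
# Human-in-the-loop triage: every suggestion still goes to a reviewer,
# but high-confidence items surface first so the "good stuff" is quick
# to confirm and low-confidence items get closer scrutiny.
REVIEW_THRESHOLD = 0.90  # illustrative cutoff, not a real product setting

suggestions = [
    {"code": "E11.9", "evidence_page": 42, "confidence": 0.93},
    {"code": "I10",   "evidence_page": 7,  "confidence": 0.50},
]

def triage(items, threshold=REVIEW_THRESHOLD):
    """Split suggestions into high- and low-confidence review queues."""
    high = [s for s in items if s["confidence"] >= threshold]
    low = [s for s in items if s["confidence"] < threshold]
    return high, low

high, low = triage(suggestions)
```

The design point is that the model's confidence score routes work rather than replacing judgment: the human still makes the final call on every item.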