This piece is excerpted from Separating AI fact from fiction: A guide for healthcare leaders. You can read the full ebook here.

If popular media coverage were to be believed, there’s nothing AI can’t quickly and easily automate. In reality, that’s not quite the case, especially in sensitive, highly regulated areas like healthcare that demand high accuracy.

Here’s a good way to think about it: If you approached a random person on the street, gave them an internet-enabled device, and asked them to create a list of 10 US coffee shops with witty names, they’d probably be able to complete that task. The stakes are low, and the answers are pretty easy for anyone to find on the web. But if you asked them whether someone with a specific high-deductible insurance plan would require a prior authorization before receiving a brand-name asthma medication, they’d get stuck.

AI is a bit like that person on the street. It’s extremely impressive at solving problems whose answers are straightforward and easy to find. As the subject matter or task becomes more complex, it gets that much harder to build an algorithm that can handle it. And when you consider how critical accuracy and privacy are when dealing with someone’s health, compared with someone’s coffee-shop-naming brainstorm, and how unforgiving mistakes are in that context, you can start to see why AI for healthcare is so complicated.

Of course, that doesn’t mean AI can’t be used to solve some of the healthcare industry’s biggest challenges (you wouldn’t be reading this if that were the case). AI is good at automating routine, repeatable processes, and there are certainly plenty of those in healthcare operations. It’s just critical to ensure you’re working with an AI solution that has the right domain expertise and appropriate guardrails in place, and that you understand how to evaluate whether a solution can back up its claims.

Relatedly, if the idea of AI creating new healthcare-related content or ideas based on existing data – as generative AI does – gives you pause, you’re not alone. 

A lot of the media hype around AI today centers on easier, lower-stakes tasks, like our coffee shop example above. When large language models (LLMs) fail, they can “hallucinate,” making up facts and details that aren’t true while sounding completely confident in their responses. In the healthcare space, that’s not something we can afford.

Hallucinations occur because LLMs are trained to continue patterns and generate plausible-sounding text, not necessarily to provide 100% factual information. If you’re like most people and have experimented with generative AI-powered chatbots, you’ve probably experienced this, and it’s a big part of why Infinitus has human guardrails in place. We keep human reviewers in the loop to ensure the quality of our digital assistant’s outputs.
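To make the human-in-the-loop idea concrete, here is a minimal sketch of what a review gate might look like in code. The names, threshold, and structure are simplified assumptions for illustration only, not a description of Infinitus’s actual system.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical illustration only: names, thresholds, and structure are
# assumptions for explanation, not any vendor's real implementation.

CONFIDENCE_THRESHOLD = 0.9  # below this, a human must review the output


@dataclass
class ModelOutput:
    text: str              # what the LLM generated
    confidence: float      # verifier-assigned confidence, 0.0 to 1.0
    source_verified: bool  # whether the claim was checked against a trusted source


@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, output: ModelOutput) -> None:
        # Hold the output until a human reviewer approves or corrects it.
        self.pending.append(output)


def release_or_escalate(output: ModelOutput, queue: ReviewQueue) -> Optional[str]:
    """Release an output only if it clears automated guardrails;
    otherwise escalate it to a human reviewer."""
    if output.source_verified and output.confidence >= CONFIDENCE_THRESHOLD:
        return output.text  # safe to release automatically
    queue.submit(output)    # a person checks it before it reaches anyone
    return None


# Example: an unverified, low-confidence answer never goes out unreviewed.
queue = ReviewQueue()
answer = ModelOutput(
    text="This plan does not require prior authorization.",
    confidence=0.62,
    source_verified=False,
)
print(release_or_escalate(answer, queue))  # None -> routed to human review
print(len(queue.pending))                  # 1
```

The point of the pattern is simple: anything the model can’t back up automatically gets held for a person to check before it ever reaches a patient or provider.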

When AI is used in healthcare, guardrails at every step of the pipeline are crucial. The job of an AI company is to evaluate all the models and tools available to it, and to employ those technologies in a manner that is representative, fair, and produces accurate results.

Any solution you evaluate needs to be able to say the same.

If you’re interested in learning more and understanding how to cut through the hype to accurately assess AI solutions, you can read Separating AI fact from fiction: A guide for healthcare leaders here.