With the recent influx of data across the healthcare industry, the time to fully implement artificial intelligence and machine learning is now, according to Brandon Sim, chief technology officer and chief operating officer at ApolloMed, a tech-powered healthcare management company.
“Especially because recent CMS regulations are going to force electronic health record companies to share their data in a more reasonably accessible fashion,” Sim said. “Just the sheer amount of data that is being collected – the rise of alternative sources of data like remote patient monitoring and wearable devices – make it a very natural fit for using machine learning and AI methods in healthcare.”
Further implementing these technologies into healthcare has the potential to advance preventive care, but for that to be possible, health systems need to adopt “high-level” AI and ML processes, according to Sim.
WHY THIS MATTERS
Health system executives need to be open to implementing automation and figure out how best to use it for their patient populations, while tech developers need to build better solutions that providers will actually want to use.
LOW-LEVEL VERSUS HIGH-LEVEL AUTOMATION
Sim views the different types of tasks that can be assisted with automation as a sliding scale. At one end, there are “low-level” tasks and at the other are “high-level” tasks.
On the lower end are the repetitive duties that are currently conducted manually. Workflows such as revenue cycle management, analyzing scans, prior authorization, provider credentialing and other administrative activities fall on this side of the scale.
“All of those are what I classify as automation of routine tasks, which AI can certainly help out in,” Sim said. “Machine learning and AI are essentially pattern-finding recognition tools so when there’s a pattern, it can very well do that and figure it out pretty easily.”
This sort of automation is more prevalent in healthcare because it doesn’t require advanced techniques and there is less risk involved, according to Sim.
“There’s not really the risk of something bad happening,” he said. “If you let a robot conduct a surgery, there’s a lot of risks, but if you let a robot process a claim it’s not a big deal. Certainly, there are lots of low-hanging fruit that are immediately implementable.”
From Sim’s perspective, it’s really a “no-brainer” for health system administrators to automate these lower-level tasks.
On the other end of the spectrum is what Sim calls “high-level” automation, which typically involves assisting in care management or care coordination. This could include analyzing swaths of anonymized patient data to risk-stratify patients and recommend actions for preventive care, or AI-led robotic surgery.
“Those need to have providers at the center because, at the end of the day, an algorithm is not going to be able to replace a physician,” Sim said. “It’s going to make a recommendation but for legal, ethical and all sorts of other considerations, the doctor has to be the one who approves and actually makes the final action.”
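The provider-in-the-center workflow Sim describes can be sketched in code. This is a hypothetical illustration, not ApolloMed's system: the names, threshold, and data are all invented. The key property is that the algorithm only produces a recommendation with a rationale, and nothing is acted on until a physician explicitly approves it.

```python
# Minimal human-in-the-loop sketch: the algorithm recommends, the provider decides.
# All names and the 0.70 threshold are illustrative assumptions, not clinical values.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Recommendation:
    patient_id: str
    action: str
    rationale: str        # explanation shown to the reviewing physician
    approved: bool = False  # stays False until a provider signs off


def recommend(patient_id: str, risk_score: float) -> Optional[Recommendation]:
    """Suggest a preventive-care action for high-risk patients; never act directly."""
    if risk_score > 0.70:
        return Recommendation(
            patient_id,
            "schedule preventive follow-up",
            f"risk score {risk_score:.2f} exceeds 0.70 threshold",
        )
    return None  # no recommendation for lower-risk patients


def provider_review(rec: Recommendation, provider_approves: bool) -> Recommendation:
    """The final action is always the physician's call, not the algorithm's."""
    rec.approved = provider_approves
    return rec


# The algorithm flags a patient, but the action only proceeds after approval.
rec = recommend("patient-001", 0.85)
rec = provider_review(rec, provider_approves=True)
```

The design point mirrors the quote: the system is structured so there is no code path from model output to action that bypasses the provider's approval step.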
While this type of automation carries higher stakes, it could also deliver more value, according to Sim.
“[The algorithm] needs to be 100% accurate all the time and you need to work with the providers, all sorts of specialists, and it just takes a longer time to develop those tools,” he said. “But at the end of the day, those will provide greater improvements to the patient’s life rather than just savings on administrative processes.”
IMPROVING ALGORITHMS FOR HIGH-LEVEL FUNCTIONS
A common reason many health systems choose not to use AI and ML is a lack of trust in the algorithms.
“A lot of times algorithms fall into something we call a ‘black box algorithm’ where you put in a bunch of data, you can’t see anything inside of it, and then it just spits out something that you have no clue how it got there but that’s the answer it gives you,” Sim said.
When there’s no way to see how an algorithm makes its decisions, it can be challenging for providers and patients to have confidence in it.
The idea of trustworthy AI has been top of mind for many leaders in healthcare, and the Consumer Technology Association recently unveiled a new ANSI-accredited standard for judging algorithms’ trustworthiness.
“A big part of building trust in AI and machine learning algorithms is allowing for them to be explainable and transparent,” Sim said.
So instead of simply spitting out a conclusion, Sim says, algorithms should also provide some explanation of how they arrived at their answers.
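One simple way to make a model's answer explainable, rather than a black box, is to have it report each input's contribution to the score alongside the score itself. The sketch below is a toy illustration with hand-set weights (not clinically derived, and not any specific vendor's method): a logistic-style risk score that returns both a probability and a per-feature breakdown a reviewer can inspect.

```python
# Toy explainable risk score: returns the prediction AND the reason for it.
# Feature names and weights are invented for illustration only.
import math

WEIGHTS = {"age_over_65": 1.2, "prior_admissions": 0.8, "hba1c_elevated": 0.6}
BIAS = -2.0


def explainable_risk(features: dict) -> tuple[float, dict]:
    """Return (risk probability, each feature's contribution to the logit)."""
    # Per-feature contributions are what make the output auditable:
    # a reviewer can see exactly which inputs drove the score up.
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))  # logistic link
    return probability, contributions


risk, why = explainable_risk(
    {"age_over_65": 1, "prior_admissions": 2, "hba1c_elevated": 1}
)
# 'why' shows which inputs drove the score, instead of an unexplained number.
```

Linear and additive models like this are transparent by construction; for more complex models, post-hoc attribution techniques aim to recover a comparable per-feature explanation.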
As these algorithms become more reliable, they will become better suited to the high-level automation tasks that Sim believes will be essential moving forward.
“There’s still a lot of innovation that can be made and that’s the exciting part,” he said. “I see a move towards a familiarity and comfort with these higher-level care coordination and care assistant-type AI algorithms, especially as we make advances on explainable algorithms.”
Email the writer: [email protected]