Artificial intelligence (AI) has become a fixture in healthcare revenue cycle management (RCM), where finance leaders are searching for ways to relieve understaffed departments struggling under unprecedented volumes of payer audit demands and rising denial rates without sacrificing accuracy or precision.
With RCM staffing shortages persisting, AI provides a critical productivity boost. By investing in data, AI, and technology platforms, compliance and revenue integrity departments reduced their necessary team size by a third while performing 10% more audit activities compared to 2022, according to the 2023 Benchmark Report. In 2024, that productivity gain grew to 35%, with teams able to do more with less through AI.
This is where AI shines. Arguably its greatest strength is uncovering outliers, the needles in the haystack, across millions of data points.
Unfulfilled Promises
While AI has enabled the automation of many RCM tasks, the promise of fully autonomous systems remains unfulfilled. This is partially due to software vendors’ focus on technology without first taking the time to fully understand the targeted workflows and the human touchpoints within them, a practice that leads to ineffective AI integration and poor end-user adoption.
For AI to function appropriately in a complex RCM environment, humans must be in the loop. Human intervention helps overcome deficits in accuracy and precision, the toughest challenges with autonomous AI, and enhances outcomes, helping organizations avoid the repercussions of poorly designed solutions.
Financial impacts are the most obvious repercussion for healthcare organizations. Poorly trained AI tools used to conduct prospective claim audits might miss instances of undercoding, which means missed revenue opportunities. For one MDaudit customer, an incorrect rule within their “autonomous” coding system was improperly coding drug units administered, resulting in $25 million in lost revenues. The error would never have been caught and corrected if not for a human in the loop uncovering the flaw.
AI can also fall short by overcoding, generating false positives, an area under particular scrutiny given the government’s mission of fighting fraud, abuse, and waste in the healthcare system.
Retaining Humans in the Loop
Again, keeping humans in the loop is the best strategy for preventing these negative outcomes. Three specific areas of AI will always require human involvement to achieve optimal results.
Read about these three areas in the full article, published by Becker’s Health IT.