
3 Critical Roles of Humans in AI-Enhanced Healthcare RCM

Sep 12, 2023 · 5 minute read

Artificial intelligence (AI) is quickly becoming a fixture in healthcare as more use cases emerge on both the administrative and clinical sides of the house. For healthcare finance leaders, integrating AI into revenue cycle management (RCM) workflows can relieve overburdened, understaffed departments facing unprecedented volumes of third-party audit demands, rising denials, and worsening labor shortages.

AI’s greatest strength is surfacing outliers, the needles in the haystack, across millions of data points, a capability that gives the RCM function a competitive advantage in driving tangible outcomes. Leaders who dismiss AI as hype risk leaving their organizations behind and being labeled technological laggards.

Yet truly autonomous AI in healthcare remains a pipe dream, despite heavy marketing to the contrary.

AI has enabled the automation of many RCM tasks; however, the promise of fully autonomous systems remains unfulfilled. This is partly due to software vendors’ tendency to focus on the technology without first taking the time to fully understand the targeted workflows and, importantly, the human touchpoints within them, a practice that leads to ineffective AI integration and poor end-user adoption.

But it is also because humans must always be in the loop to ensure that AI functions appropriately in the complex RCM environment. While the stakes may not be as high on the RCM side as on the clinical side (patient lives are not at risk if a hospital’s claim scrubber misses an incorrect CPT code), the repercussions of poorly designed AI solutions are still significant. Recent media reports have shown how AI can be misused and exploited to harm patients by burdening them with enormous financial liabilities.

For healthcare organizations, the financial impacts are the most obvious. For example, a poorly trained AI tool conducting prospective claims audits might miss instances of undercoding, which in turn means missed revenue opportunities. One customer shared that an erroneous rule within their autonomous coding system miscoded the drug units administered, resulting in $25 million in lost revenue. Without a human in the loop to uncover the flaw, the error would never have been caught and corrected. Missed instances of overcoding, whether intentional or accidental, are equally damaging, putting the provider at risk of repayments and penalties.
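To make the drug-unit example concrete, here is a minimal sketch of the kind of human-in-the-loop guardrail that could surface such an error. The claim fields and unit ceilings are hypothetical placeholders; real limits would come from clinical and coding policy, and flagged lines would go to a reviewer rather than being auto-corrected.

```python
# Hypothetical per-HCPCS-code unit ceilings, maintained by coding staff.
MAX_UNITS = {
    "J9271": 10,   # illustrative drug code; real limits come from clinical policy
    "J1100": 40,
}

def flag_implausible_units(claim_lines):
    """Return claim lines whose billed units exceed the staff-set ceiling.

    Flagged lines are routed to a human reviewer rather than auto-corrected,
    so a faulty autonomous-coding rule is surfaced instead of silently repeated.
    """
    return [
        line for line in claim_lines
        if MAX_UNITS.get(line["hcpcs_code"]) is not None
        and line["units"] > MAX_UNITS[line["hcpcs_code"]]
    ]

# A rule that multiplied units by 100 would be caught on the first claim:
lines = [{"hcpcs_code": "J9271", "units": 300}, {"hcpcs_code": "J1100", "units": 4}]
print(flag_implausible_units(lines))  # the J9271 line lands in the review queue
```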

Poorly designed AI can also harm individuals. Consider the implications if an AI tool is not properly trained on the “at-risk provider” concept in the revenue cycle sense. Physicians could find themselves unfairly targeted for additional scrutiny and training if they are swept up in searches for at-risk providers with, for example, high denial rates. Such targeting wastes time that should be spent seeing patients, slows cash flow by holding claims for prospective review, and can damage reputations by slapping physicians with a “problematic” label.

Preventing these types of negative outcomes requires humans in the loop.

3 Aspects Requiring Humans in the Loop

1. Building a strong data foundation

Building a robust data foundation is the critical first step. An underlying data model with proper metadata, data quality, and governance is key to enabling peak efficiency in AI. For this to happen, developers must get into the trenches with billing compliance, coding, and revenue cycle leaders and staff to fully understand their workflows and the data needed to perform their duties.
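As one illustration of what a data foundation looks like in code, the sketch below runs basic completeness and uniqueness checks on a claims extract before it feeds any AI model. The column names and thresholds are assumptions for this example; the real list would be defined with the billing compliance, coding, and revenue cycle staff described above.

```python
import pandas as pd

# Hypothetical required fields for a claims extract.
REQUIRED_COLUMNS = ["claim_id", "provider_id", "payer_id", "cpt_code", "billed_amount"]
MAX_NULL_RATE = 0.01  # tolerate at most 1% missing values per required field

def audit_claims_extract(df: pd.DataFrame) -> list:
    """Return a list of data-quality problems found in a claims DataFrame."""
    problems = []
    for col in REQUIRED_COLUMNS:
        if col not in df.columns:
            problems.append(f"missing required column: {col}")
        elif df[col].isna().mean() > MAX_NULL_RATE:
            problems.append(f"{col}: null rate {df[col].isna().mean():.1%} exceeds threshold")
    if "claim_id" in df.columns and df["claim_id"].duplicated().any():
        problems.append("duplicate claim_id values found")
    return problems
```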

Effective anomaly detection, for example, requires not only billing, denials, and other claims data but also an understanding of the complex interplay between providers, coders, billers, payers, and other parties. Only then can the technology continuously assess risks in real time and deliver the information users need to focus their actions on activities that drive measurable outcomes. Organizations that skip the data foundation and rush to deploy AI models with shiny tools will instead get hallucinations and false positives that create noise and hinder adoption.
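For illustration only, here is a minimal anomaly-detection sketch over claims data using scikit-learn’s IsolationForest. The feature names are hypothetical; in practice the features would encode the provider, coder, biller, and payer interplay described above, and flagged claims would be queued for human review rather than acted on automatically.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical engineered features per claim (e.g., denial history, submission lag).
FEATURES = ["billed_amount", "units", "provider_denial_rate", "days_to_submit"]

def score_claims(claims: pd.DataFrame) -> pd.DataFrame:
    """Attach an anomaly score to each claim and mark outliers for human review."""
    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(claims[FEATURES])
    scored = claims.copy()
    scored["anomaly_score"] = model.decision_function(claims[FEATURES])  # lower = more anomalous
    scored["needs_review"] = model.predict(claims[FEATURES]) == -1       # -1 marks an outlier
    return scored
```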

2. Continuously training the AI technology

Healthcare RCM is a continuously evolving profession that requires ongoing education to ensure its professionals understand the latest regulations, trends, and priorities. The same is true of AI-enabled RCM tools. Reinforcement learning allows AI to expand its knowledge base and increase its accuracy, and user input is critical to the refinements and updates that keep AI tools meeting current and future needs.

AI should be trainable in real time, allowing end users to immediately provide input and feedback on the results of information searches and analyses to support continuous learning. Users should also be able to mark data as unsafe when warranted to prevent its amplification at scale, for example, an output that attributes financial loss or compliance risk to a specific entity or individual without explaining why.
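One way to structure that feedback loop is sketched below under assumed names (the Feedback record and its fields are illustrative, not a specific product’s API): every verdict is captured, an “unsafe” verdict must carry a reason, and unsafe findings are excluded from any retraining set so they cannot be amplified at scale.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Feedback:
    """One user judgment on an AI-generated finding."""
    finding_id: str
    reviewer_id: str
    verdict: str        # "correct", "incorrect", or "unsafe"
    reason: str         # required when marking a finding unsafe
    timestamp: datetime

def record_feedback(store, finding_id, reviewer_id, verdict, reason=""):
    """Capture one piece of user feedback immediately, as it is given."""
    if verdict == "unsafe" and not reason:
        raise ValueError("an 'unsafe' verdict must explain why")
    fb = Feedback(finding_id, reviewer_id, verdict, reason,
                  datetime.now(timezone.utc))
    store.append(fb)
    return fb

def training_examples(store):
    """Exclude unsafe findings so they are never amplified at scale."""
    return [fb for fb in store if fb.verdict != "unsafe"]
```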

3. Properly governing AI

Humans must validate AI’s output to ensure it is safe. For example, even with autonomous coding, a coding professional must verify that the AI has properly “learned” how to apply updated code sets or handle new regulatory requirements. When humans are excluded from the governance loop, a healthcare organization leaves itself wide open to revenue leakage, negative audit outcomes, reputational damage, and more.
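A governance policy like this can be made mechanical. The sketch below, with assumed thresholds and field names, holds any autonomously coded claim for human validation when the model’s confidence is low or when the claim touches a newly effective code set.

```python
from datetime import date

CONFIDENCE_FLOOR = 0.95                  # hypothetical threshold set by governance policy
NEW_CODES_EFFECTIVE = date(2023, 10, 1)  # e.g., the most recent code-set update

def route_coded_claim(claim: dict) -> str:
    """Decide whether an autonomously coded claim may be submitted directly.

    Low-confidence output, or anything involving a newly effective code set,
    goes to a coding professional for validation first.
    """
    if claim["model_confidence"] < CONFIDENCE_FLOOR:
        return "human_review"
    if claim["code_effective_date"] >= NEW_CODES_EFFECTIVE:
        return "human_review"  # verify the model has properly "learned" new codes
    return "auto_submit"
```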

There is no question that AI can transform healthcare, especially RCM. Doing so, however, requires healthcare organizations to augment their technology investments with human oversight and workforce training to optimize accuracy, productivity, and business value.
