AI in Healthcare: A Love Story With a Very Expensive Plot Twist

I’m going to say something that might ruffle some feathers in the vendor demo circuit: AI is not your compliance team.

It’s not your coder. It’s not your documentation reviewer. And it is absolutely, unequivocally, not a substitute for understanding what you’re actually billing.

I want to be clear — I’m not anti-AI. I use it. Most of us do at this point. But there’s a version of “AI adoption” happening in healthcare right now that is giving me anxiety. The kind where you watch someone walk toward a cliff and they’re nodding along to a sales pitch the whole way down.

So let’s talk about what’s actually happening — and what’s at stake.


“But Our AI Does the Coding”

Cool. Who’s auditing it?

This is where the conversation usually gets quiet. Organizations are implementing AI-assisted coding tools, AI clinical documentation tools, AI prior auth tools — layering automation on top of automation — and somewhere in the process, the human accountability piece got left in the parking lot.

Here’s the regulatory reality: the False Claims Act (31 U.S.C. §§ 3729–3733) does not have an exception for bots. Knowingly submitting false claims — and “knowingly” includes acting in deliberate ignorance or reckless disregard of the truth — is a liability that sits with the provider organization. Not the software vendor. Not the implementation consultant. You.

So when your AI overcodes 300 E/M visits over six months because it was trained on data that skewed high, and you didn’t audit it because you assumed it was working correctly — that’s not a technology failure. That’s a compliance failure. And it’s exactly the kind of patterned error auditors are very good at identifying.


The Cloned Note Problem, Now at Scale

CMS has been warning about cloned documentation for years. Copy-forward notes, templated findings, documentation that looks exactly the same across every encounter — it’s been an audit focus through RAC reviews, ZPIC audits, and provider education initiatives basically since EHRs became mainstream.

And then we introduced AI scribes and AI-generated clinical summaries. Tools that pull from previous encounters, suggest language based on pattern matching, and pre-populate assessments based on what was documented last time.

I need you to sit with that for a second.

The clinical note is the legal record that justifies the code. It has to reflect what actually happened in that specific encounter, with that specific patient, on that specific date. If the AI is writing notes that are essentially templated from historical patterns — and the physician is clicking “approve” without a real review — you don’t have individualized documentation. You have a liability factory running on autopilot.

For E/M services especially, medical decision-making complexity has to be specific to the encounter. That’s not a suggestion. That’s the 2021 AMA guidelines that CMS adopted. Your AI doesn’t know that Mrs. Johnson presented differently today. It knows what was documented last time.


Your AI Was Trained on Someone Else’s Mistakes

AI coding tools learn from historical data. That’s the whole premise. And historical data in healthcare billing is… complicated.

Some practices have historically undercoded — out of risk aversion, lack of education, or payer intimidation. Some have historically overcoded — intentionally or not. Either way, if your AI tool was trained on that data, it’s going to replicate those patterns. Consistently. Efficiently. And at a scale that makes them very easy for CMS to spot when it pulls your claims.

Pattern recognition is exactly what auditors do. It’s the logic behind NCCI edits and aberrant billing flags. An AI that produces consistent, patterned errors across thousands of claims is not a needle-in-a-haystack situation. It’s a red flag that practically waves itself.
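
To make that concrete, here’s a toy sketch in Python of the kind of distribution check an auditor, or your own compliance team, can run on E/M utilization. The codes are real E/M levels; the benchmark shares, the tolerance, and the function names are all assumptions for illustration, and a real comparison would use specialty-specific CMS utilization data.

```python
# A toy sketch of an E/M distribution check. The benchmark shares and
# tolerance below are made up; real comparisons use CMS utilization data.
from collections import Counter

def em_level_shares(claims: list[str]) -> dict[str, float]:
    """Each E/M code's share of total billed visits."""
    counts = Counter(claims)
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

def flag_skew(claims: list[str], benchmark: dict[str, float],
              tolerance: float = 0.10) -> list[str]:
    """Flag codes billed more than `tolerance` (10 points, an assumption)
    above their benchmark share."""
    shares = em_level_shares(claims)
    return [code for code, share in shares.items()
            if share - benchmark.get(code, 0.0) > tolerance]

# Hypothetical practice that bills heavily at levels 4 and 5.
claims = ["99214"] * 550 + ["99215"] * 300 + ["99213"] * 150
benchmark = {"99212": 0.10, "99213": 0.40, "99214": 0.35, "99215": 0.15}

print(flag_skew(claims, benchmark))  # ['99214', '99215'] -> worth a closer look
```

That’s twenty lines. If a toy script can see the skew, so can a payer’s analytics team.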

Know what your tool was trained on. Ask your vendor. Push for an answer. And then audit the output before you trust it with your revenue — and your license.


Your Compliance Program Needs to Know Your Tech Stack Exists

The OIG updated its Compliance Program Guidance in November 2023. The message was clear: an effective compliance program has to keep pace with how your organization actually operates — including technology adoption.

If you added an AI tool in the last two years and your compliance risk assessment hasn’t been updated to account for it, you have a documented gap. And “we just implemented it” is not a compliance program. It’s a sentence that sounds bad in an investigative interview.

Your compliance program needs to answer: Who validates AI output before claims go out? What does your audit cadence look like for AI-assisted billing? How do you handle errors identified post-submission? What’s the escalation path when the tool is consistently wrong?

These aren’t hypothetical questions. They’re the questions you want to have already answered before someone else asks them on your behalf.


Here’s What I Actually Want You to Do

Use AI. Please — our industry needs efficiency. But use it like the compliance professional or coding expert you are, not like someone who just wants to check a box and move on.

Audit your AI’s output before you trust it with live billing. Pull a statistically meaningful sample. Look at what it’s coding, what it’s missing, and where it’s overcoding. Then keep auditing — quarterly, at minimum.
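
If you want a starting point for that loop, here’s a minimal sketch, again in Python, of the sample-and-score workflow. The field names, the sample size of 30, and the demo claims are all hypothetical; size your real sample with a proper statistical method (the OIG’s RAT-STATS tool exists for exactly this).

```python
# A minimal sketch, not a production audit tool. Field names, sample size,
# and demo data are assumptions; size a real sample statistically.
import random
from dataclasses import dataclass

@dataclass
class AuditedClaim:
    claim_id: str
    ai_code: str       # what the AI tool assigned
    auditor_code: str  # what your human coder says it should have been

def audit_summary(sample: list[AuditedClaim]) -> dict[str, float]:
    """Agreement, overcoding, and undercoding rates for a reviewed sample."""
    n = len(sample)
    # Comparing code strings works within one E/M family (99212 < 99215);
    # a real tool would map codes to levels explicitly.
    agree = sum(c.ai_code == c.auditor_code for c in sample)
    over = sum(c.ai_code > c.auditor_code for c in sample)
    under = sum(c.ai_code < c.auditor_code for c in sample)
    return {"agreement": agree / n, "overcoded": over / n, "undercoded": under / n}

# Step 1: pull a random sample of AI-coded claims for human review.
all_claim_ids = [f"CLM-{i:05d}" for i in range(5000)]  # hypothetical IDs
sample_ids = random.sample(all_claim_ids, k=30)
# ...route sample_ids to your coders, collect AuditedClaim records...

# Step 2: score the tool on the reviewed sample.
demo = [
    AuditedClaim("CLM-00001", "99214", "99213"),  # overcoded
    AuditedClaim("CLM-00002", "99213", "99213"),  # agrees
    AuditedClaim("CLM-00003", "99215", "99214"),  # overcoded
]
print(audit_summary(demo))
# {'agreement': 0.33..., 'overcoded': 0.66..., 'undercoded': 0.0}
```

The code isn’t the point. The cadence is: the same loop, every quarter, with findings documented and errors worked back through your correction process.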

Make sure your physicians understand that attestation is not a click-through. If they’re signing off on AI-generated notes without actually reading them, you have a documentation integrity problem that no policy language can fix after the fact.

Update your compliance risk assessment to include the tools you’re actually using. If your risk assessment still looks like it was written in 2019, it does not reflect your current risk exposure.

And read your vendor contract carefully. When something goes wrong — and eventually, something will go wrong — your agreement with your software vendor will tell you exactly how alone you are. Know that before you sign.


AI is a tool. Compliance is a discipline. Revenue integrity is the outcome when you apply both with intention.

We’re all figuring this out in real time — but “figuring it out” can’t mean skipping the audit. Not in this industry. Not with these stakes.

Compliantly Yours,

Whitney
