What CDI and coding experts teach AI about clinical context
April 14, 2026 | Utkarsh Tripathi and Thomas Polzin
Read time: 6 mins
It’s 11:35 a.m. in a busy hospital system.
A clinical documentation integrity (CDI) specialist and a coding specialist are staring at the same chart — and the chart is doing what charts do best: telling five versions of the story at once.
The emergency department documents “AMS.”
The hospitalist writes “confusion likely due to UTI.”
Neurology says “delirium.”
Nursing notes “patient pulling at lines.”
The medication administration record (MAR) shows opioids and benzodiazepines.
Labs reveal a sodium swing.
The CT head is negative.
Somewhere in the middle of all that noise sits a deceptively simple question with major downstream impact:
Do we actually have encephalopathy documented — and if so, what kind?
For CDI and coding experts, this is where experience quietly shows up.
The CDI specialist is already thinking:
Do we have clinical indicators, treatment, and clear provider linkage?
The coding specialist is thinking:
Do we have provider-stated diagnosis language that's specific, consistent and reportable across the encounter?
Different lenses. Same invisible work.
They’re both building context before taking the next step.
From chart review to context engineering
In agentic AI, that invisible work has a name: context engineering.
Context engineering is the practice of deciding what an AI agent should see — and just as importantly, what it should not see — at a given moment so it can act reliably. In healthcare, that action might be:
- Drafting a documentation clarification query
- Flagging conflicting diagnoses
- Producing a coding-ready clinical summary
- Identifying where evidence supports — or doesn’t support — a diagnosis
A key technical concept here is the context window. AI models can only attend to a limited amount of information at once. If you dump the entire chart into that window, the agent doesn’t become smarter — it often becomes distracted, contradictory or overly confident about the wrong detail.
Sound familiar?
CDI and coding professionals already solve this problem every day. You curate a high-signal working set: the specific notes, labs, meds and timeline moments that matter for this decision, right now.
That skill translates directly to how AI must be trained to reason in clinical documentation workflows.
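To make that concrete, here is a minimal Python sketch of the curation step. Everything in it is an assumption for illustration (the ChartItem fields, the upstream relevance score, the word-count token estimate), not anyone's production API.

```python
from dataclasses import dataclass

@dataclass
class ChartItem:
    source: str       # e.g. "ed_note", "mar", "labs" (labels invented here)
    text: str
    relevance: float  # assumed score from an upstream retrieval step

def build_working_set(items: list[ChartItem], token_budget: int) -> list[ChartItem]:
    """Keep the highest-signal items that fit inside the context window."""
    selected: list[ChartItem] = []
    used = 0
    # Greedily take the most relevant items first, stopping at the budget.
    for item in sorted(items, key=lambda i: i.relevance, reverse=True):
        cost = len(item.text.split())  # crude stand-in for real token counting
        if used + cost <= token_budget:
            selected.append(item)
            used += cost
    return selected
```

The scoring itself isn't the point. The point is that something upstream must decide what the model sees, just as a specialist decides which notes to open first.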
Why encephalopathy is the perfect context diagnosis
If there were ever a diagnosis that exposes weak context, it’s encephalopathy.
- Altered mental status is a symptom
- Delirium is often a clinical description
- Encephalopathy is a reportable diagnosis that usually requires acuity and etiology
That puts encephalopathy at a documentation crossroads — where CDI and coding experts routinely ask context questions, not just presence or absence questions:
- What exactly is being described: confusion, delirium, encephalopathy — or all three?
- Is it acute, chronic or acute on chronic?
- What is the likely etiology (metabolic, toxic/medication-related, infectious, hepatic, hypoxic)?
- Is the provider explicitly linking cause to diagnosis, or is everyone implying it?
This is also where AI agents struggle the most — unless their context is intentionally engineered.
How context engineering mirrors CDI and coding best practices
When engineers and informaticists build reliable AI agents for medical coding and CDI support, they often follow steps that will feel very familiar to you.
1. Hunt for the receipts
Pull only what matters (a sketch follows this list):
- First mention of altered mental status
- Key provider and consultant language
- Mentation trends over time
- Sedating medications and timing
- Labs and imaging that support — or contradict — the diagnosis
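A minimal sketch of that filtering step, assuming chart entries arrive as plain dictionaries with invented source and text keys. A real system would lean on structured data and clinical terminologies rather than keyword matching:

```python
# Hypothetical source labels and mentation terms, for illustration only.
RECEIPT_SOURCES = {"ed_note", "provider_note", "consult_note", "nursing_note"}
MENTATION_TERMS = ("ams", "altered mental status", "confusion",
                   "delirium", "encephalopathy")

def hunt_receipts(chart: list[dict]) -> list[dict]:
    """Keep only the entries that bear on the mentation question."""
    return [
        entry for entry in chart
        if entry["source"] in RECEIPT_SOURCES
        and any(term in entry["text"].lower() for term in MENTATION_TERMS)
    ]
```

Medications, labs and imaging would each need their own rules, but the shape is the same: pull only what matters.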
2. Stitch a snapshot
Convert chart chaos into clarity (sketched below):
- A dated, high-level mini timeline
- A one-line working story: what changed, what was treated, what improved
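Continuing the sketch, and assuming each receipt also carries a timestamp and a short summary field (both invented), the snapshot step might look like this:

```python
from datetime import datetime

def stitch_snapshot(receipts: list[dict]) -> str:
    """Render the receipts as a dated, one-line-per-event mini timeline."""
    ordered = sorted(receipts, key=lambda e: e["timestamp"])
    return "\n".join(
        f"{e['timestamp']:%m-%d %H:%M}  {e['source']}: {e['summary']}"
        for e in ordered
    )

# One event in, one dated line out: "04-14 11:35  ed_note: AMS on arrival"
print(stitch_snapshot([{
    "timestamp": datetime(2026, 4, 14, 11, 35),
    "source": "ed_note",
    "summary": "AMS on arrival",
}]))
```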
3. Guardrail the logic
Hard-code restraint (example after the list):
- Don’t equate AMS with encephalopathy
- Don’t infer etiology
- Prefer provider-stated diagnoses
- Flag conflicts instead of silently choosing a side
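In many agent stacks, that restraint lives in the instructions the model sees on every run rather than in code. A minimal sketch, with the wording entirely illustrative:

```python
# Guardrails expressed as explicit, repeated instructions to the agent.
GUARDRAILS = """\
- Do not equate 'AMS' or 'confusion' with encephalopathy.
- Do not infer etiology; report only provider-stated linkage.
- Prefer explicit provider-stated diagnoses over descriptions.
- If notes conflict, flag the conflict; never silently pick a side.
"""

def build_prompt(snapshot: str) -> str:
    """Pair the guardrails with the curated snapshot from the step above."""
    return (
        "You are a CDI support assistant, not a decision maker.\n\n"
        f"Rules:\n{GUARDRAILS}\n"
        f"Chart snapshot:\n{snapshot}"
    )
```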
4. Remember with state
Track unresolved questions (a sketch follows this list):
- What’s unclear
- What evidence supports it
- What query was sent
- What was answered
- What’s still pending
This prevents the agent from “looping” tomorrow — just like good CDI follow-through prevents the same chart from resurfacing endlessly.
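A minimal sketch of that state, kept as a record the agent reads before acting and updates afterward (the field names are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class QueryState:
    """What the agent remembers about one open documentation question."""
    question: str                                      # what's unclear
    evidence: list[str] = field(default_factory=list)  # what supports it
    query_sent: str | None = None                      # the query that went out
    answer: str | None = None                          # the provider's response

    @property
    def pending(self) -> bool:
        # A query went out, but no answer has landed yet.
        return self.query_sent is not None and self.answer is None
```

Before re-reviewing a chart, the agent checks pending and skips questions already in flight instead of asking again.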
What good AI output actually looks like
When context is engineered well, AI output becomes less “chatbotty” and far more useful — because it behaves like a disciplined assistant, not a decision maker.
Examples:
Coding/CDI summary:
“Documentation describes acute confusion/delirium. Encephalopathy is not consistently documented across the encounter. Potential contributors include medication exposure, electrolyte abnormality, and infection treatment. Consider clarification if clinically appropriate.”
Conflict flag:
“Hospitalist note documents ‘metabolic encephalopathy’; neurology note documents ‘delirium.’ Please confirm whether encephalopathy is diagnosed and, if so, acuity and etiology.”
Notice what the agent does not do:
- It doesn’t invent a diagnosis
- It doesn’t assign etiology
- It doesn’t smooth over uncertainty
That restraint isn’t a limitation — it’s a context engineering win.
AI as assistant, not authority
Even with clean context and strong tooling, AI agents are best viewed as the world’s fastest assistants — excellent at organizing information, drafting summaries and flagging gaps.
But the clinical judgment, compliance accountability and final decisions still belong where they always have:
With CDI and coding experts.
In many ways, AI isn’t teaching healthcare how to think differently. It’s finally being forced to learn how you already think — carefully, contextually and with respect for what the record actually says.
Utkarsh Tripathi, MS, is a senior AI and machine learning engineer at Solventum.
Thomas Polzin is an agentic AI lead at Solventum.