Entity

Clinical Documentation Query

The CDI specialist's request to a physician for documentation clarification including the specific question, clinical indicators, and physician response.

Last updated: February 2026. Data current as of: February 2026.

Why This Object Matters for AI

AI CDI systems require query history to learn effective questioning patterns; without structured queries, AI cannot improve documentation specificity recommendations.

Health Information Management & Medical Records Capacity Profile

Typical CMC levels for health information management & medical records in healthcare organizations.

Formality: L4
Capture: L3
Structure: L3
Accessibility: L2
Maintenance: L2
Integration: L2

CMC Dimension Scenarios

What each CMC level looks like specifically for Clinical Documentation Query. Baseline level is highlighted.

L0

Clinical documentation queries are not formally tracked. When a CDI specialist has a question about a physician's documentation, they walk to the physician's office or leave a sticky note on the chart. There is no formal record of what was asked, whether the physician responded, or how the response affected coding and reimbursement.

None — AI cannot analyze CDI query patterns, physician response rates, or documentation improvement opportunities because no formal query records exist.

Create formal CDI query records — document every query with the patient encounter, the specific clinical question, the clinical indicators that prompted the query, the queried physician, and the query date.
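A formal query record like the one described above can be sketched as a simple data structure. This is a minimal illustration, not a standard schema; all field names are assumptions:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CDIQuery:
    """One formal CDI query record: every query captured, nothing informal."""
    encounter_id: str               # patient encounter the query belongs to
    clinical_question: str          # the specific question asked of the physician
    clinical_indicators: list[str]  # lab values / findings that prompted the query
    queried_physician: str
    query_date: date
    physician_response: Optional[str] = None  # filled in when the physician answers

# Example: a query prompted by an unexplained low sodium value
q = CDIQuery(
    encounter_id="ENC-1042",
    clinical_question="Please clarify the clinical significance of Na 128 mmol/L.",
    clinical_indicators=["Na 128 mmol/L"],
    queried_physician="Dr. Example",
    query_date=date(2026, 2, 1),
)
```

Even this flat record makes the L0 failure mode visible: an unanswered query is simply one whose `physician_response` is still empty, rather than a sticky note no one can find.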

L1

CDI queries are logged in a spreadsheet or basic tracking system with the patient name, query date, and a free-text description of the question. But the clinical indicators, the specific documentation gap, and the physician's response are not captured consistently. Some queries are logged; others happen informally and are never recorded.

AI could count query volumes and track which physicians receive the most queries, but cannot analyze query effectiveness or documentation improvement impact because the clinical context and outcomes are not formally recorded.

Standardize CDI query documentation — require every query to include the specific clinical indicator (lab value, medication, clinical finding), the documentation gap identified, the query type (clarification, specificity, present on admission), and structured fields for the physician response.
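One way to enforce that standardization is to make the query type and response status closed vocabularies rather than free text. A sketch, with illustrative enum values taken from the list above:

```python
from dataclasses import dataclass
from enum import Enum

class QueryType(Enum):
    CLARIFICATION = "clarification"
    SPECIFICITY = "specificity"
    PRESENT_ON_ADMISSION = "present_on_admission"

class ResponseStatus(Enum):
    PENDING = "pending"      # physician has not yet responded
    AGREED = "agreed"        # physician amended documentation as queried
    DISAGREED = "disagreed"  # physician declined the suggested clarification

@dataclass
class StandardizedQuery:
    clinical_indicator: str   # e.g. a lab value, medication, or clinical finding
    documentation_gap: str    # what the chart currently fails to state
    query_type: QueryType
    queried_physician: str
    response_status: ResponseStatus = ResponseStatus.PENDING
    response_text: str = ""

sq = StandardizedQuery(
    clinical_indicator="Na 128 mmol/L",
    documentation_gap="No diagnosis documented for hyponatremia",
    query_type=QueryType.SPECIFICITY,
    queried_physician="Dr. Example",
)
```

Because `query_type` and `response_status` are typed, response rates and agreement rates can later be aggregated reliably instead of being parsed out of free text.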

L2

CDI queries follow a standardized format with required fields: clinical indicator, documentation gap identified, query type, queried physician, and response tracking. The CDI team can report on query volumes, response rates, and agreement rates by physician and query type. But queries are documented in isolation — they are not linked to the specific chart documentation, coding impact, or reimbursement change they influenced.

AI can generate CDI productivity reports — query volumes, response rates, and agreement rates. Can identify physicians who frequently need queries. Cannot measure the actual clinical documentation or financial impact of queries because the links to coding changes and reimbursement are not captured.

Link CDI queries to documentation and coding outcomes — connect each query to the specific chart section that was amended, the coding change that resulted, and the DRG/reimbursement impact of the physician's response.

L3

CDI queries are linked to clinical documentation and coding outcomes. Each query connects to the chart documentation that prompted it, the physician's response, the resulting documentation amendment, the coding change (if any), and the DRG/reimbursement impact. A CDI manager can query 'show me all queries that resulted in a DRG change with a reimbursement impact over $5,000' and get traceable results.
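Once queries carry their coding outcomes, the manager's example question reduces to a simple filter. A sketch over an illustrative linked-record shape (field names and sample data are assumptions):

```python
from dataclasses import dataclass

@dataclass
class LinkedQuery:
    """A CDI query joined to its coding outcome."""
    query_id: str
    drg_before: str            # DRG assigned before the physician's response
    drg_after: str             # DRG assigned after the resulting amendment
    reimbursement_delta: float # dollars; positive means increased reimbursement

def high_yield_queries(queries, threshold=5000.0):
    """Queries whose response changed the DRG with impact over the threshold."""
    return [
        q for q in queries
        if q.drg_before != q.drg_after and q.reimbursement_delta > threshold
    ]

sample = [
    LinkedQuery("Q1", "DRG-291", "DRG-291", 0.0),     # no DRG change
    LinkedQuery("Q2", "DRG-292", "DRG-291", 7200.0),  # DRG change, over $5,000
    LinkedQuery("Q3", "DRG-871", "DRG-870", 3100.0),  # DRG change, under threshold
]
hits = high_yield_queries(sample)  # only Q2 qualifies
```

The same linkage is what makes ROI reporting possible at this level: the financial impact is a stored attribute of the query, not a manual reconstruction.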

AI can measure CDI program ROI — tracking financial impact per query, identifying high-yield query opportunities, and recommending where CDI review should focus based on historical documentation-coding-reimbursement linkages.

Implement formal CDI query schemas with entity relationships — model each query as a structured entity with typed relationships to clinical findings, documentation standards, coding guidelines, physician education needs, and quality measure impacts.
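The typed relationships described above can be sketched as a query entity holding references to other business objects. Entity type names and IDs here are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EntityRef:
    """Typed pointer to another business object in the schema."""
    entity_type: str  # e.g. "ClinicalFinding", "CodingGuideline"
    entity_id: str

@dataclass
class SchemaQuery:
    """A CDI query modeled as an entity with typed relationships."""
    query_id: str
    clinical_finding: EntityRef        # the finding that triggered the query
    documentation_standard: EntityRef  # the standard not met
    coding_guideline: EntityRef        # the guideline at stake
    physician_id: str                  # links to the physician's history
    quality_measures: list[EntityRef] = field(default_factory=list)

sq = SchemaQuery(
    query_id="Q9",
    clinical_finding=EntityRef("ClinicalFinding", "CF-12"),
    documentation_standard=EntityRef("DocStandard", "DS-3"),
    coding_guideline=EntityRef("CodingGuideline", "CG-44"),
    physician_id="MD-7",
    quality_measures=[EntityRef("QualityMeasure", "QM-2")],
)
```

With references rather than free text, an agent can traverse the clinical-documentation-coding chain mechanically, which is the precondition for deciding whether a query is warranted at all.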

L4 (Current Baseline)

CDI queries are schema-driven with full entity relationships. Each query links to the clinical finding that triggered it, the documentation standard not met, the coding guideline at stake, the physician's documentation history, and the quality measure implications. An AI agent can evaluate whether a query is warranted by traversing the complete clinical-documentation-coding relationship chain.

AI can generate CDI queries autonomously for routine documentation gaps — identifying clinical indicators that warrant queries, drafting compliant query language, and routing to the appropriate physician. Complex clinical judgment queries are flagged for human CDI specialists.

Implement real-time CDI query streaming — publish every query creation, physician response, and documentation amendment as a real-time event, enabling continuous CDI program monitoring and physician feedback.
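The streaming recommendation can be sketched as a minimal in-process event bus. Topic names and payload shapes are assumptions; a production system would publish to a message broker rather than in-memory handlers:

```python
from collections import defaultdict

class QueryEventBus:
    """Tiny pub/sub sketch for CDI query lifecycle events."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        """Register a handler for one lifecycle event type."""
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        """Deliver an event to every subscriber of its type."""
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = QueryEventBus()
log = []  # a monitoring consumer collecting every event it sees
bus.subscribe("query.created", log.append)
bus.subscribe("query.responded", log.append)

# Each lifecycle step is published as it happens
bus.publish("query.created", {"query_id": "Q7", "physician": "Dr. Example"})
bus.publish("query.responded", {"query_id": "Q7", "status": "agreed"})
```

Continuous program monitoring and physician feedback then become subscribers on these topics instead of batch reports run after the fact.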

L5

CDI queries are generated in real time from clinical documentation analysis. As physicians document encounters, the system identifies documentation gaps and generates queries before the chart is completed. The CDI query process becomes a continuous feedback loop between clinical documentation and coding optimization.

AI can autonomously manage the CDI query lifecycle: identifying documentation opportunities in real time, generating compliant queries, tracking responses, and measuring impact as a continuous documentation intelligence engine.

Ceiling of the CMC framework for this dimension.
