Written by Nicole Benson, BA, CPC, CPMA, COSC
While coders, auditors, and providers no longer need to fret over the tedious history and exam counting of yesteryear for outpatient evaluation and management (E/M) leveling, data complexity under the 2021 guidelines carries its own confusion about how data is counted and how it qualifies across documentation. Our prior April blog reviewed the AMA’s March 9th technical correction publication; the full article can be found on the AMA website. Here we revisit some of the clarifications relating to the amount and/or complexity of data to be analyzed, and discuss in greater detail how this translates into documentation and into selecting a data complexity level as one of the three columns to be considered.
Level two E/M services require minimal to no data, while levels three and four need to meet only one of the bulleted criteria listed. This requirement may often be satisfied by reviewing or ordering the specified number of tests: two unique tests for low complexity and three unique tests for moderate complexity, provided the testing is not performed and billed by the provider or practice claiming the data credit. Billing the professional component of a test cannot be given additional E/M complexity credit, as this would be double dipping into reimbursement. Level five services MUST meet two of the three bulleted areas when data complexity is one of the two columns considered. In several recent audits we performed, we noted confusion over this two-of-three requirement for the level five E/M service. Even if a provider reviews and/or orders numerous tests, for example six unique labs, no additional credit is given beyond the three specified. Either an independent interpretation, often of imaging performed by another provider, OR a discussion with an external source must also be met to fully qualify under high data complexity. Keep in mind the documentation must support it: an independent interpretation should be clearly distinguished in the note from a review of a report, which qualifies only as a unique test.
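For readers who think in logic rather than tables, the two-of-three rule above can be sketched as a simple check. This is an illustrative sketch only; the function and parameter names are hypothetical, not official AMA terminology.

```python
def meets_level_five_data(unique_tests, independent_interpretation, external_discussion):
    """High data complexity requires at least two of the three bullets.

    unique_tests: number of unique tests reviewed/ordered (credit caps at three)
    independent_interpretation: provider independently interpreted a test,
        e.g., imaging performed by another provider
    external_discussion: discussion of management or a test with an
        external physician or other qualified source
    """
    bullets_met = sum([
        unique_tests >= 3,            # extra labs beyond three add nothing
        bool(independent_interpretation),
        bool(external_discussion),
    ])
    return bullets_met >= 2


# Six unique labs alone still satisfy only one bullet -- not level five.
print(meets_level_five_data(6, False, False))  # False
# Three tests plus an independent interpretation meets two of three.
print(meets_level_five_data(3, True, False))   # True
```

Note how the first example mirrors the audit finding in the text: volume of testing alone never reaches high data complexity.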
Documentation should always provide a clear intent of service or chief complaint while telling the complete story for that encounter, including the review of data. If external notes are being reviewed, where are they from in terms of the unique source(s), what are they, and why are they pertinent to the current patient encounter? The history of present illness is a great area to capture this information, and data credit might not be given if those questions aren’t clearly answered in the note. Remember that each unique source can receive credit, while multiple notes or records from one unique source are credited only once. For example, a patient presents to their family practitioner for an emergency department follow-up, and the provider reviews the ED records as well as an operative report, from the patient’s surgeon, for a procedure performed prior to the ED visit. This would qualify as two unique sources of data.
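The "each unique source counts once" rule behaves like a set: however many records come from one source, the source is only credited a single time. A minimal sketch, with hypothetical names and illustrative source labels:

```python
def count_unique_sources(records):
    """records: list of (source, document) pairs reviewed at the encounter.

    Multiple documents from the same source earn credit only once, so we
    count distinct sources rather than distinct documents.
    """
    return len({source for source, _document in records})


reviewed = [
    ("Emergency Department", "ED visit note"),
    ("Emergency Department", "ED discharge summary"),  # same source, no extra credit
    ("Patient's surgeon", "operative report"),
]
print(count_unique_sources(reviewed))  # 2
```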
When tests are ordered at an encounter, the assumption is that they will be reviewed or analyzed when the results return. Data credit cannot be given at the next encounter for review of previously ordered testing, such as lab or x-ray results, regardless of the amount of data often integrated into the current encounter note. If a test is ordered between encounters, it can be counted as reviewed at the encounter where it is discussed with the patient. Yet this is often difficult to discern without digging through previous records. A quick clarification, such as ‘the patient is here to review CBC results that were ordered prior to the visit based on fatigue for the last week,’ helps better support the data review credit. When a patient returns for a six-month follow-up and labs ordered at the last encounter are reviewed, credit cannot be given again or separately.
Credit is given for each unique test as represented by a single CPT code; for example, 80053 comprehensive metabolic panel and 85025 CBC with differential would be two unique tests when ordered at an encounter. However, integrating or populating multiple tests into the current encounter note, without an actual indication of review or of the pertinence of those tests, should not qualify toward data complexity. Even when reports are placed under templated headers such as “Data Reviewed,” unless the provider addresses the pertinence of the test results, data credit again should not be given. EHRs have integration capabilities for ease of use, yet, much like all documentation, encounter notes became convoluted with content that was not relevant to the current episode of care. A large part of the 2021 guidelines and the Patients Over Paperwork initiative was meant to streamline information and produce more meaningful electronic documentation. Copying and pasting or integrating test results, sometimes spanning several years, is not a best documentation practice and will not increase data complexity. What will support data complexity is clarity over the external source(s) reviewed, what is currently being evaluated through testing or discussions, and inclusion of the relevance or abnormalities of that data.
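The "one CPT code equals one unique test" rule can likewise be sketched in a few lines. This is an illustrative example, not audit software; the CPT codes shown are the two from the paragraph above, and a duplicate order earns no additional credit.

```python
def count_unique_tests(cpt_codes):
    """Each distinct CPT code is one unique test; duplicates count once."""
    return len(set(cpt_codes))


ordered = [
    "80053",  # comprehensive metabolic panel
    "85025",  # CBC with differential
    "85025",  # repeat of the same code at this encounter -- no extra credit
]
print(count_unique_tests(ordered))  # 2
```

The set-based counting mirrors the guidance: volume of pasted results does not matter, only distinct tests with documented pertinence.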