Apples to Crabapples: Comparing A Hospital MRF To A Payer MRF
Published 8/25/2023
At this point, we’ve done enough "processing notes" blogs that it’s time to stop talking about how to locate and work with the data and just…work with the data.
So here we go, on our first blog really getting into the meat of MRF pricing info. We at Serif Health frequently get questions about hospital MRFs and how the quality compares to payer MRFs. Which data set is more trustworthy? Which should you use when? Is this easy to do? Is it ‘apples to apples’?
The short answer - it’s not easy, and both can be useful. But naively overlaying the different datasets just by a code number results in inaccuracy. It’s not exactly apples to oranges, but more…apples to…crab-apples? Whatever, I’m not a gifted writer, let’s roll with it. Read on for our learnings from cross-comparing these data sets via an example in Texas from Baylor Scott and White. You’ll learn where the issues arise, and how to best dodge them.
Defining a target
First off - to do any kind of useful comparison, you have to pick a target that’s well populated in both data sets.
There are many hospital MRFs that only contain gross charges - useless for comparing to payer rates. There are some that list tons of state-specific Medicare Advantage and Medicaid plans, but leave the commercial rate columns all blanked out. Also unhelpful.
Conversely, you have to consider which hospital and system TINs and NPIs are going to be well listed across the payer MRFs and cleanly isolatable. If you pick a system that contracts with payers in a funky way, you might not be able to locate them directly, you might wind up mixing in rates from system-aligned facilities that aren’t actually the hospital in question, or, for specialty hospitals, you might not find them listed as in-network at all.
Finally, you have to consider the hospital’s employment arrangement and how it might impact the density and complexity of disclosures on both sides. If the hospital employs its physicians directly, it should be posting physician professional fees in addition to hospital (often called facility or institutional) charges, and the payers should list all codes with professional fees as well. Seems simple enough, but if either MRF lumps these charges together or splits them apart in funky ways, prices will diverge.
For our analysis, we went with a pretty on-the-rails hospital MRF from Baylor Scott and White in Fort Worth. The MRF is well structured and clean, and the commercial payers and plans included are named and distinct. We ran pulls against our payer MRF inventory with their EIN (751008430) and NPI (1669472387), which are trivially locatable in MRFs from all of BUCA, and translated the payer MRF JSON into simplified CSVs (sketched in code after the list) for the following payer networks:
- Aetna National PPO
- Cigna National PPO SAR
- Cigna Pathwell OAP
- UHC Choice Plus PPO
- UHC Nexus ACO
- Aetna TX HMO
- BCBS TX Blue Advantage HMO
- BCBS TX Blue Choice PPO
- BSW Premier HMO
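For the curious, the flattening step looks roughly like the sketch below, assuming the standard CMS in-network schema. This is a minimal sketch with our own function and column names: our real pipeline streams these multi-gigabyte files and resolves provider references against the target EIN/NPI rather than loading everything into memory.

```python
import csv
import json

# Minimal sketch of flattening a CMS-schema payer MRF into CSV rows.
# Assumes the file fits in memory and that provider references were
# already filtered to the target EIN/NPI; our real pipeline streams.
def flatten_payer_mrf(json_path: str, csv_path: str) -> None:
    with open(json_path) as f:
        mrf = json.load(f)

    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["code_type", "code", "modifiers", "billing_class",
                         "service_codes", "neg_type", "rate"])
        for item in mrf.get("in_network", []):
            for group in item.get("negotiated_rates", []):
                for price in group.get("negotiated_prices", []):
                    writer.writerow([
                        item.get("billing_code_type"),
                        item.get("billing_code"),
                        "|".join(price.get("billing_code_modifier", [])),
                        price.get("billing_class"),
                        "|".join(price.get("service_code", [])),
                        price.get("negotiated_type"),
                        price.get("negotiated_rate"),
                    ])
```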
The irony: BSW’s own health plan files were NOT well populated. We only got five rows of data for this particular hospital EIN in their Premier HMO file. Given that, we dropped the BSW health plan from further analysis.
Running the comparison
Now comes the hard part. To compare across these different data sources, you have to normalize the data so that a code/price in each file matches a code/price in the other files. This is harder than it sounds, because there are multiple dimensions to validate, compare, normalize, and deduplicate.
We’ve already done plenty of other blog posts comparing payer-specific MRF quirks; to keep our apple analogy going, cross-comparing the payer MRFs themselves is like comparing a Gala apple to a Honeycrisp. Generally the same shape, but different properties :)
But the crabapple to apple comparison is what we’re focused on today. Specifically, across these different file types:
- Code type strings need to be normalized. “MS-DRG” !== “MSDRG” !== “DRG”.
- Code strings need to be normalized - the hospital MRF doesn’t post any leading zeros, UHC posts DRG codes with an extra leading zero which needs to be removed, APR DRGs can come with dashes, etc.
- Site of service (‘Inpatient’ vs. ‘Outpatient’ in the hospital MRF in this case) needs to be normalized with payer files that list CMS place of service code numbers (‘19’/’21’). Other payers only list a blank string for institutional rates, making this distinction impossible.
- Negotiation type needs to be compared and matched, but the enumerations for these values are different from hospital (‘Case Rate’, ‘Per Diem’, or ‘N/A’) and payer (‘case rate’, ‘per diem’, ‘percentage’, ‘ffs’). Logic is required to match what makes sense, and skip or drop what doesn’t (e.g. percentage types on the payer side). Since comparison logic here will be based on string matches, case sensitivity also needs to be corrected for.
- Billing class has to be exactly matched. This particular hospital MRF only has institutional rates, but some of the payers have listed data rows with ‘professional’ billing class which need to be dropped.
- Billing code modifiers have to be identical. You don’t want to compare different sub-components of radiology services or modifier-ed surgery rates to each other. In this case, the hospital file didn’t list modifiers so any payer data rows with modifiers are dropped.
- The hospital MRF has its own local ‘procedure code’ field and a lot of drug and medical device (NDC) data that is rarely present in the payer files - these wind up getting skipped.
- The hospital MRF has tons of revenue code rows with gross charge and cash rate pricing that are not actually reimbursed by payers and have no posted commercial rates. These rows also have to be skipped.
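Concretely, the first few bullets boil down to normalization helpers like this sketch. The mapping values come straight from the files discussed in this post (mapping ‘N/A’ to ‘ffs’ is our guess at the sensible equivalence); a production version carries far longer tables.

```python
# Sketch of the field normalizers described above. Mapping values come
# from the files in this post; production tables are much longer.
CODE_TYPE_MAP = {"MS-DRG": "DRG", "MSDRG": "DRG", "DRG": "DRG", "CPT": "CPT"}
NEG_TYPE_MAP = {"case rate": "case rate", "per diem": "per diem",
                "ffs": "ffs", "n/a": "ffs", "percentage": "percentage"}

def norm_code(code_type: str, code: str) -> tuple[str, str]:
    code_type = CODE_TYPE_MAP.get(code_type.strip().upper(), code_type)
    code = code.strip().replace("-", "")   # APR DRGs can come dashed
    if code_type == "DRG":
        code = code.lstrip("0")            # UHC pads an extra leading zero
    return code_type, code

def norm_site(value: str) -> str | None:
    # Hospital file says 'Inpatient'/'Outpatient'; payers post CMS place
    # of service codes ('21' inpatient, '19' outpatient) or a blank string.
    return {"inpatient": "IP", "21": "IP",
            "outpatient": "OP", "19": "OP"}.get(value.strip().lower())
```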
The net result for line-by-line matching is an absolutely disgusting comparison test that, simplified (the field names here are ours, but every clause maps to a bullet above), looks something like:
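```python
def rows_match(h: dict, p: dict) -> bool:
    # h: hospital MRF row, p: payer MRF row, both already run through the
    # normalizers above. Every clause exists because one of the files
    # forced it on us.
    return (
        h["code_type"] == p["code_type"]           # MS-DRG/MSDRG/DRG unified
        and h["code"] == p["code"]                 # zeros/dashes stripped
        and p["billing_class"] == "institutional"  # drop 'professional' rows
        and not p["modifiers"]                     # hospital file posts none
        and p["neg_type"] != "percentage"          # nothing to resolve % against
        and h["neg_type"] == p["neg_type"]         # case rate vs per diem vs ffs
        and (p["site"] is None or h["site"] == p["site"])  # blank payer sites
        and h["rate"] is not None                  # skip gross/cash-only rows
    )
```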
And this is the amount of logic you need AFTER choosing ‘clean’ examples and normalizing each of the fields to consistent strings and casing. Hmm. Given the warts on that logic, maybe I should have said hedge apple instead of crab?
OK, so once you have the line matching approach sorted out, you’ll immediately run into a new problem - multiple rates.
Aetna lists multiple negotiated prices for each outpatient CPT code with no distinction between them. $14,252 seems to be repeated, but there’s no indication whether that’s real or an artifact of some contract mechanism.
Blue Cross lists multiple different percentages for some codes, and a negotiated rate alongside a percentage for others:
UHC’s Choice Plus file only lists one price, but the Nexus ACO fileset lists every DRG twice with two different rates:
In all of these examples, which to pick? What’s fair? You can stab at a few different methodologies, but for the rest of this analysis we just took the highest listed rate for each unique row.
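In pandas terms, that ‘take the highest’ rule is a one-liner (the frame and column names here are ours, not from either schema):

```python
import pandas as pd

# 'matched_rows.csv' is a hypothetical export of the post-matching rows
# across all payer networks. When a payer posts several rates for the
# same key, keep the highest. dropna=False keeps blank-site rows.
matched = pd.read_csv("matched_rows.csv")
key = ["network", "code_type", "code", "site", "neg_type"]
deduped = matched.groupby(key, as_index=False, dropna=False)["rate"].max()
```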
Simplifying our analysis
This gives us a really nice table of raw pricing data (link) that lets us make some high-level conclusions and simplifications.
Check out Aetna HMO vs PPO and BCBS HMO vs PPO:
As you can see, some payer rates are identical across product tiers, allowing us to drop the multiple Aetna, Cigna, and UHC networks. Hooray!
The same does NOT apply to BCBS, which we probably could have inferred from the hospital MRF from the start: its columns list separate per-tier rates for BCBS and no one else, and that’s a strong hint at how the plans set up their contracts.
This is a key takeaway: hospital MRFs are quite helpful at indicating where tiered pricing exists for commercial payers. If you know the rates won’t change by product tier, you don’t need to process (or pay someone for) several different payer MRF extracts just to discover the prices are all the same.
Second, we can calculate a ‘fill rate’ for the comparison: the count of rows with a matched rate in both the hospital MRF and the payer MRF, relative to the total code count (using the hospital file as the source of the code count).
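Sketched against a hypothetical ‘merged’ frame (one row per network/code, pairing the hospital rate against the payer rate; names are ours):

```python
import pandas as pd

# Fill rate = rows priced on both sides, divided by the hospital file's
# code count. 'merged_rates.csv' is a hypothetical frame with one row
# per (network, code_type, code); either rate column may be empty.
merged = pd.read_csv("merged_rates.csv")
denominator = merged.groupby("code_type")["code"].nunique()
numerator = (
    merged.dropna(subset=["hospital_rate", "payer_rate"])
          .groupby(["network", "code_type"])["code"].nunique()
)
fill_rate = numerator.div(denominator, level="code_type")
```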
As the table shows, DRGs are very well populated and dense across all plans, while the CPT code ranges look like an apple that’s been besieged by worms (didn’t want to bring cheeses into the metaphor). That’s a strong indication of which subset of this data is likely to be trustworthy and useful for analysis, and it’s another key takeaway from these files: if you want to cross-compare hospital and payer MRFs, stick to DRGs.
The gap rate across payers, plus the logic headaches necessary to correctly compare CPT codes (and the chances you get bitten by an underlying source-data methodology issue), makes procedure data substantially more difficult to calculate, and to trust.
Delta Calculations and Summary Outputs
Now that we know which data elements are well populated and methodologically clean to compare, we can finally start to answer the question: which dataset is ‘better’? What data can you trust?
We can summarize and graph the relative deltas across the code ranges and conclude a few things.
- DRG match rates (defined here as the share of rows priced in both files whose hospital/payer delta is below 5%; see the sketch after this list) show some solid signals. CPT match rates, on the other hand, are so bad that the CPT code ranges are not really worth further evaluation.
- Aetna's DRG match rate is really good, and Cigna comes in a reasonably close second. BCBS and United look atrocious, but when you graph this across the DRGs the picture gets clearer. BCBS data doesn’t match well out of the box, but the line tracking its percentage delta mimics Cigna and Aetna and stays north of the hospital MRF by about 20% for both the HMO and PPO (the lines in the chart are on top of each other). So it’s consistent in not matching, and thus perhaps not ‘wrong’ or junk data so much as mis-baselined. In fact, after re-baselining the hospital MRF by +20%, the BCBS data matches the hospital MRF across 92% of all DRGs - a match rate even better than Aetna’s or Cigna’s! United, on the other hand, well, I think it’s safe to say the DRG rates they posted in their files are throwaways.
- Saving the most important observation for last: the fact that Aetna and Cigna are relatively in agreement with the hospital MRF (and the BCBS deltas track them closely after re-baselining) is critically important. It’s a signal that multiple parties are generating reasonable pricing data from some shared ground truth somewhere. If none of the data agreed, you’d have to throw up your hands and say we’re all just reading tea leaves and price transparency data is a bunch of manufactured and/or manipulated garbage not worth pursuing (trust me, we’ve heard every possible polemic on this subject). But outside of UHC's DRG rates, that’s not what we’re seeing here.
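Here’s the match-rate calculation sketched against the same hypothetical merged frame, including the rebase knob we used to test the +20% BCBS theory:

```python
import pandas as pd

# Match rate as defined above: of rows priced in both files, the share
# whose delta is under 5%. 'rebase' scales the hospital rate so we can
# test the +20% shift that brings BCBS into line.
def match_rate(df: pd.DataFrame, threshold: float = 0.05,
               rebase: float = 1.0) -> float:
    both = df.dropna(subset=["hospital_rate", "payer_rate"])
    baseline = both["hospital_rate"] * rebase
    delta = (both["payer_rate"] - baseline).abs() / baseline
    return (delta <= threshold).mean()

merged = pd.read_csv("merged_rates.csv")  # same hypothetical frame as above
drg = merged[merged["code_type"] == "DRG"]
for network, rows in drg.groupby("network"):
    print(network, f"{match_rate(rows):.0%}",
          f"(rebased +20%: {match_rate(rows, rebase=1.2):.0%})")
```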
Bad Apples
Ok, ok, I had to use the term. But there really are some bad apples in these files, and eagle-eyed readers of the DRG chart probably already spotted them.
The huge spikes around DRG 790 are likely due to errors in the hospital file. Newborn and neonate DRG codes for inpatient stays have absurdly low per diems listed in the hospital file, in the low hundreds of dollars. I have had enough hospital bills from my kids’ births to know the per diem is universally far more than three hundred dollars, even for normal newborns. Our son, who spent three days under NICU blue lights to stabilize his bilirubin levels, came out to tens of thousands per day. The deltas are extreme for BCBS, but even Cigna and Aetna, who are largely in agreement across the rest of the DRG range, come in well above these posted prices. Conclusion - it seems highly unlikely the hospital source data is accurate for these rows.
Why they’d suddenly deviate and disagree or use a different methodology to price out some DRGs vs others is a question left to the reader and the SB 1137 compliance team at BSW.
Spot checking some other rows in the CPT tables, I was able to identify what are most likely errors in the source payer data that make comparisons challenging. Check out this rowset from the Blue Advantage HMO:
Throughout this file there are tons of ‘percentage’ arrangements set at 57. Suspiciously, the CPT 77076 and 73620 pro fees are listed as ‘negotiated’ rates but also at exactly 57 - this is most likely a percentage arrangement the payer misclassified as a dollar rate.
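That pattern is mechanically detectable. A sketch of the heuristic, run against the flattened payer frame from earlier (the filename is hypothetical):

```python
import pandas as pd

# Hypothetical flattened file from the sketch earlier in this post.
payer_df = pd.read_csv("bcbs_tx_blue_advantage_hmo.csv")

# Flag 'negotiated' dollar rates that exactly equal a percentage value
# appearing elsewhere in the same file - almost certainly misclassified.
pct_values = set(payer_df.loc[payer_df["neg_type"] == "percentage", "rate"])
suspicious = payer_df[
    (payer_df["neg_type"] == "negotiated") & payer_df["rate"].isin(pct_values)
]
print(suspicious[["code_type", "code", "rate"]].to_string())
```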
Conclusion
While this is one cherry(apple?)-picked scenario, the consistency between the payer and hospital MRFs here gives our team tremendous optimism that both price transparency data sets can be trustworthy, valuable, and useful for participants in the healthcare system.
Yes, it’s hard to work with. Yes, it requires tooling and infrastructure and domain knowledge. But, if you have the patience and willingness to identify the good apples, they’re out there. Our team is happy to provide our good-apple-finding technology to assist your organization - get in touch with us today!