Data

When are benchmarks not benchmarks? This will be a long podcast, so strap in a little bit. The MGMA collected data from 2,500 practices. There was a webinar where the MGMA, in conjunction with WhiteSpace, which effectively offers data warehousing, did a presentation and talked about MGMA benchmarks. They spoke a bit about benchmarking and related topics. But as we go through and listen to the webinar, there are very few actual benchmarks. It seems like there’s a general lack of understanding of what a benchmark is.

For example, at one point in the webinar, one of the panel’s consultants suggests that listeners should want to know that their billing is performing up to the norm. But there’s no discussion of what that norm is or how you would tell whether you’re performing well compared to it.

This article was initially available on our podcast. Click here to listen.

Understanding MGMA Benchmarks: Metrics That Matter

Up to that point, a total of two metrics had been provided. Those two metrics are bad debt, which they measure in days (they give 9.1 days as the average), and accounts receivable, which is also expressed in days, at 58.2. There is a whole series of problems with the specificity of these data and these MGMA benchmarks.
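For reference, the webinar never states how those day figures are computed. The conventional industry formula for days in A/R divides total outstanding receivables by average daily gross charges; the dollar figures below are invented just to land on their 58.2:

$$ \text{Days in A/R} = \frac{\text{total A/R outstanding}}{\text{average daily gross charges}}, \qquad \text{e.g.} \quad \frac{\$291{,}000}{\$5{,}000/\text{day}} = 58.2 \text{ days} $$

Bad debt in days presumably works the same way, with dollars sent to bad debt in the numerator.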

For example, if you are a large practice or a small practice, or you’re orthopedics or general surgery or radiology, are these numbers relevant to your practice? If you have 100% Medicare, or 100% workers’ comp, or whatever your particular payer mix is, are they relevant to you?

I would even suggest that these are not the most important KPIs. These are not the most important metrics I would want to see coming out of benchmarking a 2,500-practice dataset. That’s a lot of data.

What type of data?

There’s also no discussion about what data was collected. Did they get raw data? Did they ask questions like “What’s your average?” and have users respond with self-reported answers instead of actual data? No idea. We don’t know anything about this.

They also don’t talk about how you would figure out whether you’re performing well relative to these norms. How do you collect your data? How do you extract it? How do you clean it? How do you join it? How do you calculate these metrics yourself? Even if you wanted to compare against those two metrics, again, there’s nothing on any of that.
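As a rough illustration of what that would actually involve, here is a minimal sketch, assuming a hypothetical claims export; the file name and columns are invented for illustration, not anything the webinar describes:

```python
import pandas as pd

# Hypothetical export from a practice-management system; the file and
# column names are illustrative only.
claims = pd.read_csv("claims_export.csv", parse_dates=["service_date"])

# Clean: drop voided rows and obviously bad charge amounts.
claims = claims[(claims["charge_amount"] > 0) & (~claims["voided"])]

# Days in A/R = total outstanding balance / average daily gross charges.
period_days = (claims["service_date"].max() - claims["service_date"].min()).days
avg_daily_charges = claims["charge_amount"].sum() / period_days
days_in_ar = claims["outstanding_balance"].sum() / avg_daily_charges

print(f"Days in A/R: {days_in_ar:.1f}")
```

Even a sketch this small involves extraction, cleaning, and a calculation convention, none of which the webinar touches.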

Denials

Then, we get to a section about denials. There’s a lot of hand-wringing and discussion about how terrible denials are, how everybody hates them and nobody wants to deal with them. Then, we drop into some metrics. Great, we have some MGMA benchmarks. This is fantastic. There’s data. I’m excited and looking forward to hearing what this is.

A trend line is shown with 2020 data. It shows the average amounts denied by month, expressed in dollars. It wasn’t immediately clear whether this is actual data from the benchmark superset, which I hope it is. The dollar amounts denied per month are in the range of $30,000, with a low of maybe $23,000-$24,000 or so and a high of about $36,000 in a month.

Comparing Practices: The Relevance of MGMA Benchmarks

The problem is, that’s not comparable, meaning you cannot use this information to compare your practice, or any practice, to any other practice. It is entirely dependent on your volume. Whether you are charging $10 million a month or $100,000 a month, you can’t compare those two. Those are radically different scenarios.

If you have denials in dollars and there’s no corresponding percentage associated with them, this is useless information. It suggests that people don’t even understand the concept of a benchmark. A benchmark has to be comparable, meaning you can use it to compare across practices, providers, billing entities, whatever it might be.
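The normalization they never do is simple. Divide denied dollars by charged dollars and you get a rate that practices of any size can compare; this is the standard construction, not something the webinar provides, and the dollar figures are invented:

$$ \text{denial rate} = \frac{\text{dollars denied}}{\text{dollars charged}}, \qquad \frac{\$30{,}000}{\$1{,}000{,}000} = 3\% \quad\text{and}\quad \frac{\$3{,}000}{\$100{,}000} = 3\% $$

A $10-million-a-month practice and a $100,000-a-month practice can both land at 3%, and now the comparison actually means something.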

Is the graph right?

On top of that, this trend line shows that denials started at one level at the beginning of 2020, dropped on a dollar-amount basis going into April and May, and then steadily climbed from April-May until December-January of 2020-2021. What does that tell us? Does it tell us that denials got better for a couple of months and then got worse for the next eight or nine months?

If denials on a dollar basis are going up, that suggests that denials are getting worse. But the reality is, we have no idea. This is an entirely useless graph, an utterly useless chart. It doesn’t tell us anything, because we don’t know if the volume increased or decreased over time. Even if we knew that volume was rising from April-May until December 2020, we don’t see the rate at which it was increasing. Therefore, we have no idea whether the increase in denials is outpacing the volume increase or lagging it.

Are the denials improving?

It’s terrible that we can’t understand or learn anything from this chart. Let me say that plainly: a chart that just vomits information at somebody is awful. We have no idea whether denials are getting better or worse. And what would you do with that information even if you knew they were getting worse?

The following chart then shows the average monthly amounts paid. This is collections, presumably. The problem is, these are in dollars also. So we’re seeing a range of $6,000 to $11,000 in collections per month. That annualizes to roughly $72,000 to $132,000 in total annual revenue. If that’s the average across 2,500 practices, this is a very, very skewed subset of the American healthcare enterprise. An average of around $120,000 in annual revenue is a micro practice. That’s tiny. That’s not even a single physician; that’s a part-time physician or maybe not even a full-time allied health professional. That’s minuscule, and that’s the average.

Remember the difference between the mean and the median.

This is presumably the mean, not the median, which means the dollar values of large practices should skew it high. That means the median is probably even smaller than this. So the typical practice here is minuscule. Minuscule. And that does not help you compare across practices when looking at the MGMA benchmarks.
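A toy example of that skew, with invented numbers just to make the point: nine tiny practices and one large one pull the mean far above the median.

```python
from statistics import mean, median

# Invented monthly collections: nine small practices plus one large one.
collections = [10_000] * 9 + [1_000_000]

print(mean(collections))    # 109000   (dragged up by the single large practice)
print(median(collections))  # 10000.0  (what the typical practice actually looks like)
```

If the reported monthly collections average is a mean over a distribution like this, the practice in the middle of the pack is even smaller than that average suggests.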

Conclusion: Actionable Insights from MGMA Benchmarks

Again, if we see dollar amounts going up during some period, is that good or bad? If your volume of patients is going up, we would expect dollar amounts to go up. What do we learn from that? Nothing. Generally, payments and patient volume are very highly correlated; the correlation is pretty close to one. More patients, more revenue; fewer patients, less revenue. Not particularly earth-shattering. You don’t need a chart for that.

Then, we get to coinsurance. I’m going to skip that one because, again, we’re not learning anything from any of these things.

Then, the presenter starts talking about how they got denials on a new CPT code, which could skew your AR metrics, but there’s no demonstration of that. We would have no idea if new CPT codes skewed the denials metrics or the AR metrics as they suggest. There are no data presented to support that hypothesis.

If we look at denials in the last couple of months of that period (call it November 2020 to January 2021), the amount denied on a dollar basis goes down by about 5%. That means we’re doing well, right? Denials went down. Fantastic! Except that if we look separately at the charges trend, it looks like charges dropped approximately 10% (I was doing the numbers off the top of my head at the time), which means the denial rate actually went up. So if your denial chart shows a downward trend while things are getting worse, that is a terrible chart, because it is misleading at best, if not deceptive.
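Run those off-the-top-of-the-head percentages through the rate and the point is concrete; the $30,000 and $1,000,000 starting figures are invented:

$$ \frac{0.95 \times \$30{,}000}{0.90 \times \$1{,}000{,}000} = \frac{\$28{,}500}{\$900{,}000} \approx 3.17\% \quad\text{vs.}\quad \frac{\$30{,}000}{\$1{,}000{,}000} = 3.00\% $$

Denied dollars fell 5%, but the denial rate rose from 3.0% to about 3.2%, roughly a 5.6% deterioration, which is exactly the opposite of what a downward-sloping dollar chart implies.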

The following set of charts (four of them) covers average contractual obligation amounts. The presenter states that the contractual obligations don’t vary, except that even an eyeball glance says they range between $6,000 and $9,000 per month. If that isn’t varying, I don’t know what is. That’s a 50% swing, 50%. That’s variance, but he says it doesn’t vary. It varies.

Still, what are we doing with that information? They don’t explain what they count as denials. These contractual obligations, presumably, are included in the denials calculation. So a CO-45 is a denial? That’s just a contractual obligation, the difference between what you charged and the contracted rate. That shouldn’t be a denial. That’s strange, and, worse, it’s useless if not misleading. I’m not sure what they’re doing here. They show no understanding of denials and no understanding of benchmarking.

Even with that, they have charts on the average write-off amounts and patient responsibility. PR-1, PR-2, and PR-3 are not denials. Patient responsibility means how much the patient owes: the deductible, coinsurance, and copay. That’s not a denial. Not. Not only does including it make no sense logically, what is it telling you? Okay, denials went up. Well, no, denials didn’t go up; the amount that patients owed went up. That has no relationship to whether or not you’ve got a denial. On top of that, most industry definitions of denials don’t include it.
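If you were computing a denial metric yourself, the point is to exclude contractual adjustments (CO-45) and patient-responsibility lines (PR-*) before counting anything as denied. A minimal sketch with an invented set of remittance lines; the classification follows standard claim-adjustment group codes, not anything the webinar defines:

```python
# Invented remittance lines: (adjustment code, dollar amount).
remit_lines = [
    ("CO-45", 120.00),  # contractual obligation: charge above contracted rate
    ("PR-1",  50.00),   # patient responsibility: deductible
    ("PR-2",  20.00),   # patient responsibility: coinsurance
    ("CO-16", 75.00),   # claim lacks information: an actual denial
    ("CO-97", 60.00),   # bundled into another service: an actual denial
]

# Contractual write-downs and patient responsibility are not denials.
NOT_DENIALS = ("CO-45",)
denied = sum(
    amount for code, amount in remit_lines
    if not code.startswith("PR") and code not in NOT_DENIALS
)
print(f"Denied dollars: ${denied:.2f}")  # $135.00, not the $325.00 a naive sum gives
```

Counting all five lines as “denials,” the way the webinar seems to, would nearly triple the number while saying nothing about claims that were actually refused.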

What are we supposed to learn from the monthly contractual obligation amounts? What do we actually know? What are we going to change based on that information? Even if it’s trending up or trending down, what does that tell us? This entire webinar is a lot of what I would call navel-gazing: staring at your navel, picking lint out of it.

The scale on the patient responsibility chart runs from 0 to 120. Over the roughly year-long period of the chart, it varies from around the low 40s to what looks like the mid-90s, maybe the high 90s. They don’t explain what that is. My guess, my interpretation, is that it’s the average dollar amount of those patient responsibility lines, but they don’t say that. Just some vague statement about “You need to talk to patients so that they know that they have to pay their share of cost again after the pandemic.” That’s not data. That’s not benchmarking. That’s vague consultant-speak again.

Then, we go on to the summary at the end, where somebody shows up and throws up some denial benchmarks. They show average monthly contractual obligations of $7,471, average monthly write-offs of $4,500, average monthly voids of $840, and average monthly patient responsibility of $65. That doesn’t even make any sense. You can’t have an average monthly patient responsibility of $65. What they probably mean, again, is the average patient responsibility per patient encounter, presumably, not per line item. Per patient encounter, $65 is perhaps plausible.

They state that this is a good takeaway slide. No, this is an abysmal takeaway slide. It is useless. These are not benchmarks. These are averages with no practical use. How are these ever comparable? They’re not standardized into anything you can use to compare against anybody else; at the very least, every one of them depends on your volume. They suggest you compare your numbers to these numbers, talk to your revenue cycle management team, and see how you compare. God, if somebody did this, I would fire them. It would mean they don’t understand the concept of benchmarking.

I’m going to quote you the summary statement. This is what the presenter said: “It’s okay to be above these numbers. It’s okay to be below these numbers.” First of all, no, it’s not. Continuing the quote: “As long as, when you’re looking at your numbers compared to these MGMA benchmarks, you look at those numbers and say, ‘Here’s my story. Here’s how our practice went through this, and here’s how our practice either succeeded or here’s an area of opportunity for us,’ it’s okay to be not exactly at these numbers. And if the number is not where you want it, you have a good action plan to take back to your finance and RCM teams in how to make that number more ‘pleasable’ to us and where we want it to go.”

Holy crap, that is the single biggest load of garbage I have ever heard. What was that statement? You have an action plan. Oh, yeah? Where did that come from? From comparing your monthly contractual obligations in dollars to theirs? What?

This is so painful for me to listen to, because the purpose of benchmarks is to provide actionable information. Useful benchmarks would let you take information from other providers, presumably similar to you in some respects, and gain insight into where you stand relative to your peers. More importantly, they would show you what you can strive towards: what you should be able to achieve, what you should target, what you should put in place to go after it. And then, hopefully, out of that comes an action plan to get there. But they don’t do any of that.

Not only do they give you nothing but a couple of averages, but even if that data were in a comparable format, say, on a percentage basis, so that you could compare across practices, an average still doesn’t give you anything to shoot for. It doesn’t give you a target. If they give you an average, they should also give you quartiles, the top 10%, something you can use as a goal. If the industry average is 58 days in A/R and the top 10% of performers sit at 34 days, we can use that as a target to strive towards for our practice or our billing company or whatever it might be.
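Computing that kind of target from a benchmark dataset is nearly a one-liner. A sketch with invented days-in-A/R figures (the distribution parameters were chosen just to echo the 58-and-34 example above; note that because lower days in A/R is better, the top 10% of performers sit at the 10th percentile of the raw values):

```python
import numpy as np

# Invented benchmark dataset: days in A/R for 2,500 practices.
rng = np.random.default_rng(0)
days_in_ar = rng.lognormal(mean=4.0, sigma=0.35, size=2500)

print(f"mean:           {days_in_ar.mean():.0f} days")
print(f"median:         {np.percentile(days_in_ar, 50):.0f} days")
# Lower is better, so the best decile is the 10th percentile of values.
print(f"top-10% target: {np.percentile(days_in_ar, 10):.0f} days")
```

With a skewed distribution like this, the mean lands around 58 days while the top decile sits near 35, and that second number is the one a practice can actually aim at.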

The fact that basically nobody on this panel knows anything about benchmarking (unfortunately, I include in that the person titled “Vice President, Industry Insights & Business Intelligence” for the MGMA, Andrew Swanson) is shown clearly at the end, where a series of questions occupies a lot of the webinar’s time. The questions are things like, “Patients are upset about leaving a credit card on file; what would you say to them?” This is a benchmarking webinar. What does that have to do with benchmarking? Benchmarking is about data.

I’m not saying that’s not a helpful question in a different context. It may be useful for somebody to discuss, but it has nothing to do with benchmarking. There’s no discussion of, “Is that even a widespread problem?”

If they had said, “We found in our benchmarking that 46% of patients coming in are upset about leaving their credit card on file,” okay. Or, “The top three problems encountered with patients, based on these data, are A, B, and C. Number one, at 36%, is patients upset about leaving a credit card on file.” But no, they don’t do anything like that. They just field questions from the crowd that are entirely unrelated to benchmarking.

The next question was, “How many are using text for mobile payments?” Nobody comes up with any data. One presenter says something like, “I see that happening more and more.” Wow, that’s a great benchmark: more and more. “More and more” is a quantitative measure equal to approximately 1.23 shitloads.

The WhiteSpace CEO said that more mobile payments, and using text to send patient statements, result in a higher collection rate. Really? How much higher? That would be great information; that would be actual data. What is the average patient collection rate? How does it break down between patients who are texted their statements and patients who aren’t? What’s the difference? That would be phenomenal. That would be genuinely useful information, and if that data were provided, that would be benchmarking. But it’s not.
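That breakdown is trivial to produce if you have the underlying data. A sketch with an invented table of statements, whether each was texted, and what was collected; none of these figures come from WhiteSpace:

```python
import pandas as pd

# Invented patient-statement data; columns are illustrative only.
stmts = pd.DataFrame({
    "texted":    [True, True, True, False, False, False],
    "billed":    [200.0, 150.0, 400.0, 250.0, 100.0, 300.0],
    "collected": [180.0, 120.0, 320.0, 130.0,  40.0, 150.0],
})

# Patient collection rate = dollars collected / dollars billed, per cohort.
by_cohort = stmts.groupby("texted")[["billed", "collected"]].sum()
rates = by_cohort["collected"] / by_cohort["billed"]
print(rates)  # texted=False: ~49%; texted=True: ~83%
```

The gap between those two cohort rates, computed across a real dataset, is exactly the number the webinar never gives.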

Someone talks about not getting paid what you should be getting paid by the payers, but there’s no explanation of how you quantify that or what that rate is. What percentage of claims are not paid correctly? What should they be getting paid?

There’s a separate conversation around contract compliance and contract management: how do you determine whether you’re being paid appropriately, and what are all the different factors that come into play in modifying the expected reimbursement? That’s a genuinely complex subject.

Assuming that you have solved that, and that all of these practices have that information, we’re talking about what percentage of payments are incorrect. There’s no mention of that, much less an explanation of how to figure it out, or what the benchmark is, what the average is.

It just jumps to some vague suggestion that you should go back to payers and talk to them about underpayment. How do you even know you were underpaid, or what percentage of your payments were underpaid? Is it 3%? 97%? How many dollars does it represent? Is this a $10,000 problem or a $10 million problem for an individual provider?
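For what it’s worth, quantifying underpayment is conceptually straightforward once contracted rates are loaded; what follows is a sketch with an invented fee schedule and payments, and it deliberately omits the genuinely hard parts (modifiers, multiple-procedure reductions, and so on):

```python
# Invented contracted rates per CPT code, and actual payer payments.
fee_schedule = {"99213": 95.00, "99214": 140.00, "36415": 12.00}
payments = [          # (CPT code, amount paid)
    ("99213", 95.00),
    ("99214", 112.00),  # underpaid relative to contract
    ("99213", 95.00),
    ("36415", 0.00),    # underpaid relative to contract
]

expected = sum(fee_schedule[cpt] for cpt, _ in payments)
actual = sum(paid for _, paid in payments)
variance = expected - actual
print(f"Expected ${expected:.2f}, paid ${actual:.2f}, "
      f"underpaid ${variance:.2f} ({variance / expected:.1%} of expected)")
```

That last percentage is what turns “talk to your payers about underpayment” into an actual, sized problem.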

I’m not just upset, and I’m not just ranting that this is all garbage and that these people don’t know what they’re doing. (They don’t.) What’s clear is that the speakers, who are the supposed experts, don’t even understand the concept of benchmarking.

Let me come back to it again. Suppose there is data you can use to compare against your practice, provider, group, network, health system, or billing company, whatever it might be. Even then, there’s a whole separate process for getting your data out of your systems and analyzing it so that it is actually comparable.

The purpose of the MGMA benchmarks is to compare your data to others’, identify a specific target you can shoot for, and then put together a plan to get from where you are today to that future target, so that you see some improvement, and that improvement results in dollars in a bank account. That’s the purpose of benchmarking.

If everything isn’t going in that direction, substantiating and supporting that goal, then everybody’s wasting their time. What we should really be focused on is: “Is what we’re doing generating more revenue? And can we document that it’s generating more revenue, that we’ve achieved success as a result of benchmarking?” Otherwise, it’s just navel-gazing.

Author

voyant
