This post is another in my series on healthcare risk adjustment. As I promised in my last article on RADV, this one will be a less tedious read!
I was commenting to a colleague today that a lot of folks involved in risk adjustment tend to fixate on the complexities of the model itself, such as disease mappings, calibration, and performance. However, the methodology can be just as nuanced and, in many cases, of greater practical significance.
Prospective vs. Concurrent Models
One of the important methodological decisions in risk adjustment is whether to implement a prospective or a concurrent model.
Prospective Model: We estimate the next year's healthcare spend of a member using medical claim data from a given year.
Concurrent Model: We use claim data from a given year to estimate the spend for that same year. That sounds a bit odd - we already know what a member costs in a given year, so why would we want to concurrently "estimate" it? Well, that is a good question - maybe I'll answer it if nudged!
Pros and Cons
Back to concurrent and prospective. They both have pros and cons.
Concurrent advantages:
- Much more accurate than prospective models
- Moves more money around (more reflective of actual spend)
- Aligns payment adjustments to incurred cost within a year
- Important if there is significant churn in membership
Prospective advantages:
- Provides certainty: signals to participants what sort of risk adjustment transfer to expect
- As these transfers get bigger in Medicare, Medicaid, and commercial programs, it is increasingly important for organizations to have some advance knowledge of them
A Hybrid Approach: Blending Both Worlds
But why choose between concurrent and prospective? What if we start with prospective scores in January, and end at fully concurrent risk scores in December (blending the scores along the way if needed)? A blend of both worlds might be best for certain situations.
How It Works
Let's paint a scenario and assume that we are trying to adjust payments for 2019 for some program. Here's the workflow:
- January (Beginning): Use 2018 data to prospectively score members and let market participants know the scores
- Throughout the Year: These scores are highly correlated with concurrent scores, so knowing them at the start gives participants a useful expectation of risk adjustment
- December (End): Compute scores concurrently for more accurate risk adjustment
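The blending along the way can be sketched in a few lines. This is purely illustrative: the linear monthly ramp is my assumption, not a prescribed schedule, and the function names are made up for this post.

```python
def concurrent_weight(month: int) -> float:
    """Fraction of the blended score drawn from the concurrent model.

    Assumes a linear ramp: fully prospective in January (month 1),
    fully concurrent by December (month 12).
    """
    if not 1 <= month <= 12:
        raise ValueError("month must be in 1..12")
    return (month - 1) / 11


def blended_score(prospective: float, concurrent: float, month: int) -> float:
    """Blend a member's prospective score (from 2018 data) and
    concurrent score (from 2019 data to date) for a month of 2019."""
    w = concurrent_weight(month)
    return (1 - w) * prospective + w * concurrent
```

In January the blend returns the prospective score unchanged, giving participants their early signal; by December it has shifted entirely to the concurrent score.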
Methodological Flexibility
There are myriad methodological options that become available with this approach, and they can be tailored to a given program's objectives. For example:
- Base X% of risk adjustment transfers on prospective scores and (1-X)% on concurrent scores
- Vary X to shift the blend between certainty and accuracy
- Different blend ratios can accommodate different program needs
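The X% / (1-X)% split above is just a weighted average of the two transfer amounts. A minimal sketch (the function name and dollar figures are mine, for illustration only):

```python
def blended_transfer(prospective: float, concurrent: float, x: float) -> float:
    """Transfer amount with fraction x based on the prospective
    score and fraction (1 - x) on the concurrent score."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must lie in [0, 1]")
    return x * prospective + (1.0 - x) * concurrent

# Varying x shifts the blend between certainty (x = 1, fully
# prospective) and accuracy (x = 0, fully concurrent):
for x in (1.0, 0.5, 0.0):
    print(x, blended_transfer(120_000.0, 150_000.0, x))
```

A program prioritizing predictability would set x high; one prioritizing alignment with actual incurred cost would set it low.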
Cost and Implementation
Sure, there might be an additional cost to running two models (you would run both models every year). But for an efficiently run program, the cost will be less than double. Further, the total implementation costs are insignificant against the backdrop of tens of billions of risk adjustment dollars in the US healthcare system.
Ensemble Modeling
A few of my readers may be familiar with the term ensemble modeling. In this sort of modeling you leverage more than one model in ways that mitigate the weaknesses of any single model, yielding a more accurate result. Perhaps an ensemble methodology can unlock similar benefits here.
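To make the ensemble intuition concrete, here is a toy sketch with entirely made-up numbers: two hypothetical models with offsetting errors, whose average lands near the truth. Real ensembles combine genuinely different models (e.g. by averaging or stacking), but the error-cancellation idea is the same.

```python
# Illustrative data only: "true" member risk scores and two
# hypothetical models, one systematically high, one systematically low.
truth   = [1.00, 2.50, 0.80]
model_a = [t + 0.30 for t in truth]   # overestimates every member
model_b = [t - 0.30 for t in truth]   # underestimates every member

# Averaging the two models cancels the offsetting errors
# (up to floating point rounding).
ensemble = [(a + b) / 2 for a, b in zip(model_a, model_b)]
```

Neither model alone is accurate, but the ensemble recovers the true scores because their errors point in opposite directions.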
Conclusion
I end where I began: a lot of focus has gone to risk score modeling, but risk adjustment methodology deserves some attention as well! There are many (as yet unanswered) questions embedded in a blended approach that would make great topics for further research.
Additional Reading
The above article is simplified (understatement!). Further, I doubt that a blended approach would work well in all situations. If you are interested in a primer on risk adjustment (including concurrent/prospective models), please refer to: https://www.soa.org/research-reports/2016/2016-risk-scoring-primer/
Note: The opinions expressed do not necessarily reflect those of my colleagues or employer.