Measurement is a good thing. Maybe there are a few naysayers out there who believe you can’t measure this psychotherapy stuff, but whether you believe it or not, measuring whether what you do is effective is the future. We are moving rapidly into a pay-for-performance (PFP) reimbursement model in which improvement will have to be monitored. Even if you aren’t in a PFP model, you will likely see more capitation reimbursement coming your way. In both cases, overtreatment can be financially deadly. The goal is to provide the right amount of care at the right time. So, whether you agree or not, you will be held accountable to measurement. But what to measure? I believe there are three buckets – symptomatology, functionality and satisfaction.

In general, symptomatology is for acute problems, functionality is for more chronic issues, and satisfaction is for everybody. Why satisfaction? Because it, more than almost anything else, determines whether people follow through with their treatment.

For the purpose of this blog, I want to talk about functionality. There are a lot of functional measures out there. Some are good and some, like the thank-God-it’s-dead, rubber-ruler Global Assessment of Functioning (GAF) score, are not so good. That’s probably a little unfair to the GAF, but I’ve seen it misused repeatedly. The most egregious example was a psychiatrist documenting that a patient scored 2.5 on the 100-point GAF scale. A 2.5 on the GAF is roughly the functioning level of a cucumber. Used correctly, the GAF could be okay, but it rarely was used correctly. One functional measure, the DLA20, did a pretty good job of improving on the poor inter-rater reliability of GAF scores by providing anchor definitions for each of its 20 functional areas. However, as of DSM-5, the GAF is no longer around, and that’s probably a good thing. It’s been replaced – or more accurately, it’s been recommended that the World Health Organization Disability Assessment Schedule (WHODAS) 2.0 be used instead.

Measuring someone’s functionality (using a psychometrically valid and reliable tool, of course) can be valuable for clinicians, consumers and loved ones, especially when the consumer has a serious mental illness. Tracking how well someone is doing facing life’s challenges can help all parties by prioritizing those challenges and celebrating successes. Unfortunately, there has been a trend of late in which functional assessments are being used to determine levels of care. I say “unfortunate” because many, perhaps most, of these tools were never designed to serve this purpose. It is an understandable error. Intuitively, the lower someone is functioning, the higher the probability that they will need more intensive services; the higher the functioning, the less intensive the care needs. I get that. The problem is that unless the instrument has been normed so that certain scores or score ranges have been demonstrated to correlate with specific levels of care, the instrument is being used outside its intended (and psychometrically normed) purpose.

Consider the Beck Depression Inventory (BDI). We know that someone with a high BDI score is more depressed than someone with a lower score. Specifically, BDI scores break out as follows:

  • 0–13: minimal depression
  • 14–19: mild depression
  • 20–28: moderate depression
  • 29–63: severe depression.


That’s helpful information clinically. A higher score means more severe depression. But imagine someone saying “anyone with a BDI score of 29–63 must be hospitalized.” The BDI was never designed to determine levels of care like that. Until someone norms it to do so, using the BDI to determine level of care is beyond the psychometrics of the instrument. But that’s exactly what is happening in the industry with functional assessment scores.
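To make the threshold logic concrete, here is a minimal sketch in Python. The function name `bdi_severity` is my own invention; the cutoffs are simply the breakout listed above. Note that it returns a severity band – by design, it says nothing about level of care:

```python
def bdi_severity(score: int) -> str:
    """Map a total BDI score (0-63) to its published severity band.

    Hypothetical helper for illustration only -- a severity band
    is not a level-of-care determination.
    """
    if not 0 <= score <= 63:
        raise ValueError("BDI total scores range from 0 to 63")
    if score <= 13:
        return "minimal"
    if score <= 19:
        return "mild"
    if score <= 28:
        return "moderate"
    return "severe"
```

A score of 14, for example, lands in the "mild" band, while 29 and anything above it lands in "severe" – which is exactly why a blanket "29–63 means hospitalize" rule overreaches the instrument.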

In addition to the inherent problems of using these instruments in ways they weren’t intended to be used, there is the other problem of – let’s be generous and say – “scoring error”. I don’t mean that the person scoring the instrument doesn’t know how to do so, though that does happen. I’m referring to the kind of scoring that seems to maximize funding that depends on the scores. It’s a sort of Pavlovian reinforcement thing. If lower scores mean lower functionality, and lower functionality yields higher reimbursement, guess what? Scores start drifting south. Sometimes this is clearly “fudging the numbers,” but I really believe it is more often unconscious on the part of clinicians. It does happen, though, and most of us know it happens. In one state I am very familiar with, organizations got higher case rates for more functionally impaired clients – as it should be. Interestingly, despite the obvious opportunity to maximize revenue, the CMHCs in the state were told that the data indicated they weren’t seeing the most impaired consumers. The CEOs of the CMHCs knew that wasn’t true and eloquently responded, “huh?” As it turned out, while there was an increased financial incentive for a level 5 (most impaired) over a level 4 (less impaired), the documentation required to justify a 5 over a 4 was significant, and the case managers filling out the forms figured out it just wasn’t worth the effort. The moral: Pavlov was right – rewards are where you find them.

Population management is one of the drivers for level of care assessments. Providing the correct level of care, and being able to determine when consumers can move to less intensive levels of care, is critical to managing populations. As mentioned earlier, capitation and PFP models both require population health management skills, and in capitated environments it’s more important that the measurements be accurate than that they meet some preconceived expectation.

OK, if functionality measures are not appropriate to this task, what is? Fortunately, the good folks at the Mental Health Center of Denver (MHCD) have a solution. If you aren’t familiar with MHCD, you should be. MHCD wants to be not just a leader in recovery-oriented services but the leader. From where I sit, I’m hard-pressed to identify anyone doing it better. Some of you may know that MHCD developed a four-pack of recovery-oriented assessment tools that are psychometrically sound and clinically useful, but they also tackle a much harder task – changing culture.


Engaging with clients at MHCD

I’ve worked with MHCD for years, and I know that the four assessment instruments are kind of a package deal – their ideal, best use is to implement all of them. But it was brought to my attention recently that one of the tools, the Recovery Needs Level (RNL), works just fine as a standalone level of care instrument, and it does so from a recovery perspective – something that even the best functional assessment products cannot do. In fact, one state hospital we are aware of recently chose the RNL over a popular functional assessment tool for this very reason: recovery wasn’t being addressed in the functional assessment tool. In addition, the RNL has been developed in such a way that “gaming” the scoring as described above is not possible.

At MHCD, the RNL allows appropriate placement in, and movement between, five levels of care:

  • Assertive Community Treatment – staff-to-consumer ratio of 1:12
  • Intensive Case Management – staff-to-consumer ratio of 1:25
  • Enhanced Outpatient – staff-to-consumer ratio of 1:40
  • Outpatient – staff-to-consumer ratio of 1:80
  • Psychiatry Only
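As a back-of-the-envelope illustration of what those ratios imply for staffing, here is a hypothetical sketch. The lookup table and the `staff_needed` helper are mine, not MHCD’s, and Psychiatry Only is omitted because no ratio is listed for it:

```python
import math

# Staff-to-consumer ratios from the continuum above (1 staff : N consumers).
# Illustrative only; Psychiatry Only carries no fixed ratio here.
STAFF_RATIOS = {
    "Assertive Community Treatment": 12,
    "Intensive Case Management": 25,
    "Enhanced Outpatient": 40,
    "Outpatient": 80,
}

def staff_needed(level: str, consumers: int) -> int:
    """Minimum whole staff required to serve `consumers` at `level`."""
    return math.ceil(consumers / STAFF_RATIOS[level])
```

The arithmetic makes the cost gradient obvious: 120 consumers on ACT need ten staff, while the same 120 consumers in standard outpatient need two.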

You will notice that Assertive Community Treatment (ACT) plays prominently in MHCD’s continuum. I have been known to have some strong opinions about ACT teams, or more specifically, the practice by state governments of mandating ACT with “high fidelity”. We will not go into this issue in detail here, except to say that one of those “high fidelity” requirements made it almost impossible to get someone off an ACT team – because that’s the way they did it when ACT was conceived. Here’s where those rowdy folks at MHCD scored extra points in the Denny Morrison Hall of Fame. They looked at the “fidelity” requirements of ACT and said “no thanks”. Okay, I’m really not sure anyone actually said that, but they did challenge the ACT tenet of keeping people on an ACT team indefinitely by using the RNL to determine when a less intensive level of care is appropriate. Hmmm…letting the data determine level of care – what a concept! At MHCD, ACT is a level of care, not a way of life.