Designing Metrics that Matter, Part 2, by Rick Crump
Welcome to the second blog in our Measure to Manage series, where we're exploring the importance of, and the path to, actionable information for managing your business, processes, and systems.
- How to Use Metrics to Track Performance and Initiate Corrective Action
- What metrics are best?
- Performing a Metric Audit
In our last session we pressed home the need for your metrics to be based on Relevant, Complete, Accurate, and Timely (RCAT) information in order to be actionable. Today we're going to begin the approach for getting there.
When a client comes to me in need of metrics and reports, they often have in mind what they want designed, and they honestly believe they're beginning with the end in mind. What could be more "end in mind" than the metric itself? Well, for one thing, how about what you intend to do with that information when you receive it?
Now, to be fair, managers have usually given a lot of thought to what the metric should look like, what information it should contain, and how often they want to receive it. However, when we draft a design of the metric(s) to their specs, and then have what I refer to as the 'business conversation' (i.e. how they'll use the information for action), they often find the design to be incomplete at best.
For instance, measuring defect rates for a given process, by product line, seems like a good thing to know, but what are you going to DO with that information? "Well Rick, for one, I'll meet with the managers overseeing the product lines with the highest defect rates and we'll put together a plan to get more information."
Ok, I agree that's a necessary course of action, given the limited information you currently have; but I have a few questions for you:
Do you know what additional information you'll be requesting based on those high defect rates?
- If no, why would you wait until you have a problem to figure it out?
  - Odds are you won't be able to rely on historical information, unless you intentionally design and capture it to answer these new questions.
  - By waiting until you have a problem, you're starting with no answers and have to wait for enough data to get them.
- If yes, why would you wait until you have a problem to collect it?
  - You could be collecting data now that will either help you avoid a drop in performance, or give you insights into what to do about it when it happens.

Do you intend for this additional information gathering to be a one-time effort, or an ongoing collection from this point on?
- If one-time, how will you know whether things get better in the future?
  - Remember, you'll need a past to compare the future to in order to know whether you've moved the needle.
- If ongoing, (again) why would you wait until you have a problem to collect it?
  - Same points as stated above.
A good metric should tell you at least two things:
- Where your performance is relative to your targets and thresholds.
- Where to look next if you’re not where you want to be.
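If it helps to make those two requirements concrete, here's a minimal sketch of a metric definition that carries its own target, threshold, and "where to look next" pointer. All of the names here (`Metric`, `target`, `threshold`, `drill_down`) are hypothetical, for illustration only, and assume a lower-is-better measure such as a defect rate:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """Illustrative sketch: a metric that knows its target, its
    threshold, and the next place to look when performance slips."""
    name: str
    value: float        # current measured performance
    target: float       # where you want to be
    threshold: float    # worst acceptable value before action is required
    drill_down: str     # the next view to examine if you're off track

    def status(self) -> str:
        # Lower is better for this example (e.g. a defect rate).
        if self.value <= self.target:
            return "on target"
        if self.value <= self.threshold:
            return "watch"
        return f"act: examine {self.drill_down}"

defects = Metric("defect rate", value=0.07, target=0.02,
                 threshold=0.05, drill_down="defect rate by product line")
print(defects.status())  # exceeds the threshold, so it points to the next view
```

The point of the sketch isn't the code itself; it's that the drill-down is designed in up front, so the metric answers "where do I look next?" without a new data-gathering project.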
And the more often you have to go back for more information, the less likely you are to use the metrics you have in the first place. Why? Because using your metrics creates more questions than answers, which in turn creates more work every time you use them. You're effectively punished for using your metrics.
Does any of this sound familiar?
In our next session we’re going to dive deeper into what it means to start with the ‘business conversation’ so you can design metrics that minimize additional data gathering and maximize insights.
After all, isn’t that the essence of a metric that matters?
Rick Crump is CEO and Principal Consultant at KineticXperience.
KineticXperience is a consulting partner of KPI Fire.