
You improve what you measure, but what you measure may not be what you want improved

There's a certain stupid charm to the truism that if something cannot be measured it cannot be improved. But that assumes that the measurement is germane (actually applicable to what you want improved, not just a number that's easy to collect or makes a bold statement), and that the only way to improve the measurement is to improve what it is supposed to be measuring (instead of faking the numbers, or focusing on improving the numbers to the detriment of other activities).

Unfortunately, in education (and medicine, and policing — heck, in a lot of business, too, where this all was developed), the measurements picked are often dubious, and the gaming of the numbers is widespread, because the consequences of not meeting the desired targets have been made so dire.

Take this example. Your boss can't easily measure the subjective quality of what you do, but knows that it involves a lot of copy work. So your boss starts counting how many times you visit the copy machine: more copier visits means more activity, and therefore better work being done.

Except that some activities don't require use of the copier (so you start avoiding or delegating those activities). And there's nothing stopping you from splitting the copying across two copier runs (two visits!) instead of a single, more efficient trip.
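To make the distortion concrete, here's a toy sketch (all numbers invented for illustration) comparing the boss's visit count against the actual time the copying takes:

```python
# Toy sketch of the copier metric (all numbers hypothetical).

def visits_metric(runs):
    """What the boss counts: one point per trip to the copier."""
    return len(runs)

def actual_minutes(runs, walk=5, per_page=0.1):
    """What the work costs: a fixed walk per trip plus per-page copy time."""
    return sum(walk + pages * per_page for pages in runs)

efficient = [200]    # one trip, 200 pages
gamed = [100, 100]   # the same 200 pages, split to score two visits

for runs in (efficient, gamed):
    print(f"visits: {visits_metric(runs)}, minutes: {actual_minutes(runs):.0f}")
# visits: 1, minutes: 25
# visits: 2, minutes: 30  <- metric doubles; real cost goes up 20%
```

The gamed split scores twice as well on the metric while costing 20% more in actual work time.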

But it's an easy measure, one that can be posted on the break room wall and used as the basis for merit increases, and so the only unsatisfied people left are the customers and the guy who's wondering why copier costs are up 90% this quarter …

Or take one of my business favorites: help desk turn-around time. Good service means people's needs are being met quickly, so we'll count the number of tickets closed and how quickly they get closed. Easy!

Except that means a difficult problem is going to ruin the metrics. The simple answer is to close the ticket each time you finish an interaction; if the problem requires six short-turn-around tickets rather than one long one … well, that's an improvement, based on the metrics. Or workers find reasons to escalate long-solution-time tickets up to higher-grade technicians, even when that's not an efficient use of anyone's time.
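A quick sketch (the hours are hypothetical) shows why splitting a hard problem into six tickets looks so good on paper:

```python
from statistics import mean

# Hours each closed ticket stayed open; both rows describe the same problem.
one_honest_ticket = [60]      # one ticket, worked through to completion
six_gamed_tickets = [10] * 6  # closed and reopened at each interaction

for tickets in (one_honest_ticket, six_gamed_tickets):
    print(f"closed: {len(tickets)}, avg turnaround: {mean(tickets):.0f} h")
# closed: 1, avg turnaround: 60 h
# closed: 6, avg turnaround: 10 h  <- same 60 hours of waiting, "better" numbers
```

The customer waited the same sixty hours either way; only the scoreboard changed.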

Another place I've seen this (frequently) is in demands for a hard Return on Investment. Because executive management is itself held to metrics like return on earnings or net margin, significant investments require an RoI number to show how they will fit into that higher-level accountability metric.

But some such numbers are impossible to come by. A system that makes it easier to find the right person for a job proposal sounds like a great idea, but how do you determine how much money it will actually save (from hiring costs) or what percentage of new work will be obtained because of it? You can't, without a lot of SWAGgery, and as a result, such systems either don't get funded (even if there's general agreement that it's a "good idea"), or are justified based on shaky, subjective, cherry-picked, or consultant-driven numbers (at best a re-introduction of the subjective judgment the process is trying to exclude, at worst lying for "a good cause").
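To illustrate the SWAGgery, here's a sketch of how such a justification gets built; every input below is an invented guess, which is exactly the problem:

```python
def roi(annual_benefit, annual_cost):
    """Simple return on investment: net benefit as a fraction of cost."""
    return (annual_benefit - annual_cost) / annual_cost

annual_cost = 100_000  # invented system cost

# Invented guesses at hiring savings plus revenue from newly won work.
swags = {
    "optimistic":  170_000,
    "middling":     90_000,
    "pessimistic":  30_000,
}

for label, benefit in swags.items():
    print(f"{label:11s} RoI: {roi(benefit, annual_cost):+.0%}")
# optimistic  RoI: +70%
# middling    RoI: -10%
# pessimistic RoI: -70%
```

Nothing in the arithmetic is wrong, but the answer swings from clearly fundable to clearly not depending on whose guesses you plug in.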

The bottom line is that measurement for the sake of measurement is meaningless. The Real World is analog, not digital; its complexities defy simplification to round numbers. Measurement needs to be an accurate proxy for whatever it is you are trying to improve. And setting target measurements will incent behavior to meet those targets, often with undesirable side effects.

A former boss of mine once said that the metrics we were collecting couldn't be used in a vacuum; they were meant to be the basis for further discussion, not an automated percentage pay boost or drop: to help us understand why the numbers were showing what they were, and possibly even to point to ways to improve how the process was being executed (or how the measurements were being gathered). They're words I've always tried to take to heart.




The Costs of Accountability
The ballooning demand for misplaced and misunderstood metrics, benchmarks, and performance indicators is costing us big.


2 thoughts on “You improve what you measure, but what you measure may not be what you want improved”

  1. My favorite example is from my former time in proposals. In that department, we would respond to an RFP that was due, initially, on September 1. Then, after releasing question responses on August 23, the customer might change the RFP due date to September 8. In addition, a response to an RFP is a multi-departmental effort that requires a lot of coordination and approvals.

    Enter the "quality for quality's sake" person, who asked, "All of these RFPs are due in an average of 27.274 days. Because everyone is always inefficient, I propose to improve this so that RFP responses are returned in 23.285 days."

    I exaggerate slightly, but the quality person really wanted us to get things to the customers well before their due dates. Never mind the last-minute changes that the customers themselves made to the RFP, which would require a lot of rework.

    But that wasn't measured.

  2. I can see having that measure (average due in X days, average submitted in Y). Analyzing that to look for efficiencies in each step could be helpful. Targeting being ready ahead of the submission date could also be helpful, if it is there as a contingency against being late and with the awareness (again, from analysis) that the client may change things at the last moment (to include looking for ways to make proposals less brittle and more adaptable to the most common types of changes).

    A blanket "Let's improve each step Z% so that we'll be ready Z% earlier!" is, of course, silly.
