Saturday, January 15, 2011
Jim Champy, coauthor, with Harry Greenspun, of Reengineering Health Care: A Manifesto for Radically Rethinking Health Care Delivery, introduces a lesson on the pitfalls of measurement from Faster, Cheaper, Better: The 9 Levers for Transforming How Work Gets Done, by Michael Hammer and Lisa W. Hershman.
The late Mike Hammer always delivered the unexpected in a strong voice with an intelligent edge that woke you up. When we coauthored Reengineering the Corporation, I discovered that no partner could have been more insightful, more probing into the behaviors of companies and their managers. Mike also had a great talent for metaphor. He said that inefficiencies were like fat marbled into a piece of meat, and that to get costs out you had to grind up the company and fry out the fat. That metaphor never made it into our first book. I told Mike that executives wouldn’t respond well to the notion of treating their companies so brutally.
But that didn’t stop Mike from being a radical thinker, always challenging the way things are done. He disdained the notion “if it ain’t broke, don’t fix it.” In this excerpt, from the book that Mike was working on before his untimely death at age 60 in 2008 (a work completed by his colleague Lisa W. Hershman), you will see that even things that look right can be wrong. Read it several times to grasp everything that’s here on how managers misuse metrics and measurement processes — sometimes unwittingly, sometimes purposely to deceive. It’s quintessential Hammer.
— Jim Champy
Excerpted from Chapter 2 of Faster, Cheaper, Better:
The 9 Levers for Transforming How Work Gets Done
In the sixth century, Pope Gregory the Great formulated his famous list of the seven deadly sins — gluttony, greed, wrath, lust, sloth, envy, and pride. There are also seven sins of corporate measurement. Gregory’s list was meant to help an individual’s quest for salvation. Ours is more mundane: saving companies from fatal flaws in performance measurement.
Vanity. One of the most widespread failings in performance measurement is to use measures whose sole purpose is to make the organization, its people, and especially its managers look good. As one executive said, “Nobody wants a metric that they don’t score 95 on.” This is especially true because bonuses and other rewards are usually tied to performance measures. For instance, in distribution logistics, it is common for companies to measure themselves against the promise date — that is, whether they ship by the date that they promised the customer. A moment’s impartial reflection shows that this sets the bar absurdly low — a company need only promise delivery dates that it can easily make in order to look good on this metric. Even worse, companies often measure against what is called last promise date — the final date promised the customer, after changes may have been made to the delivery schedule. It takes real effort not to hit the last promise date. Moreover, achieving good results on the last promise date has no larger significance for company performance; it does not lead to customer satisfaction or any other desirable outcome. All you have to do is keep promising a later date. Even if you manage to hit that target 100 percent of the time, it’s likely that your customer wanted the product days, weeks, or even months ago, so don’t go patting yourself on the back.
A far better metric would be performance against customer request date. But achieving that goal would be far more difficult and might lead to managers not getting their bonuses. When executives at a semiconductor manufacturer proposed shifting from last promise date to customer request date, they encountered widespread resistance.
A metals refiner had been using yield — the percentage of raw material that was turned into saleable product — as a key performance metric, and everyone was very pleased that this figure was consistently over 95 percent. An executive new to the company observed that this figure glossed over the difference between high-grade and low-grade product. The refinery was supposed to produce only high-grade product, but poor processing sometimes led to low-grade product. The company then started to measure the yield of high-grade product and discovered that figure was closer to 70 percent. That was a much more meaningful representation of the refinery’s real performance. Unsurprisingly, that insight did not generate a lot of enthusiasm.
Provincialism. This sin permits organizational boundaries and concerns to dictate performance metrics. On the surface, it would seem natural and appropriate for a functional department to be measured on its own performance. That is, after all, what its managers can control. In reality, however, measuring so narrowly inevitably leads to suboptimization and conflict. One insurance company CEO has complained that he spends half his time adjudicating disputes between sales and underwriting. The sales department is measured on sales volume. Not surprisingly, the sales force tries to sell any willing customer. Underwriting, on the other hand, is measured on quality of risk. Naturally, the underwriters want to reject all but the best prospects. The two departments clash constantly. If the salespeople win, the company will be paying out more in claims. If the underwriters win, revenue will be less than it would otherwise have been. Higher costs or lower revenue? The top brass has to choose between two evils.
Narcissism. This is the unpardonable offense of measuring from one’s own point of view, rather than from the customer’s perspective. One retailer measured its distribution organization on how well the goods in the stores matched the stock-on-hand levels specified in the merchandising plan. They had a satisfying 98 percent availability when measured in this way. But when they thought to measure to what extent the goods in the stores matched what customers actually wanted to buy, rather than what the merchandising plan called for, they found the figure was only 86 percent. Another retailer measured goods in stock by whether the goods had arrived in the store; eventually the company realized that simply being in the store did the customer no good if the product wasn’t on the shelf — and on-shelf availability was considerably lower than in-store availability. These companies measured things that interested them, not their customers.
A consumer goods maker managed its distribution operations by focusing on the percentage of orders from retailers that it filled on time. Sounds sensible. By tracking, reporting, and relentlessly seeking to improve this number, the company got it up to 99.5 percent consistently. That’s the good news. The bad news is that when the company happened to take a look at the reality of retailers’ shelves — which is what consumers see — it found that many of its products were nonetheless out of stock as much as 14 percent of the time. Many companies measure the performance of order fulfillment in terms of whether the shipment left the dock on the date scheduled. This is of interest only to the company itself. Customers care about when they receive the shipment, not when it leaves the dock. Perhaps the most egregious instance of narcissism that we have encountered was at a major computer systems manufacturer. This company measured on-time shipping in terms of individual components; if it shipped, say, nine of ten components of a system on time, the company claimed a 90 percent score. The customer, of course, would give the company a 0 percent rating, since without all ten components the system is useless.
Laziness. This is a trap into which even those who avoid narcissism often fall: assuming you know what is important to measure without giving it adequate thought or effort. A semiconductor maker measured many aspects of its order processing operation, but not the critical (to customers) issue of how long it took from the time the customer placed the order to the time the company confirmed it and provided a delivery date — simply because the company never thought to ask customers what was really important to them.
An electric power utility assumed that customers cared about speed of installation and so measured and tried to improve it, only to discover later that customers cared more about the reliability of the installation date they were given than about the speed of installation itself. Companies often jump to conclusions, measure what is easy to measure, or measure what they have always measured, rather than go through the effort of ascertaining what is truly important to measure.
Pettiness. Too many companies measure only a small component of what matters. Executives at a telecommunications systems vendor rejected a proposal to have customers perform their own repairs because that would require putting spare parts at customer premises, which would drive up spare parts inventory levels, a key metric for the company. It lost sight of the fact that the broader and more meaningful metric was total cost of maintenance, which is the sum of labor costs and inventory costs. The increase in parts inventory would be more than offset by a reduction in labor costs produced by the new approach.
Inanity. Metrics drive behavior, but too many companies implement metrics without giving any thought to the consequences of these metrics for human behavior and consequently for enterprise performance. People in an organization will seek to improve a metric they are told is important, especially if they are compensated on it and even if doing so is counterproductive. For instance, a regional fast-food chain specializing in chicken decided to improve financial performance by reducing waste, which was defined as chicken that had been cooked but unsold at the end of the day and then discarded. Restaurant managers throughout the chain obediently responded by driving out waste. They told their staff not to cook any chicken until it had been ordered. Thus did a fast-food chain become a slow-food chain. Yes, waste declined, but sales declined even more. Managers might keep in mind this variant of an old adage: “Be careful what you measure — you may get more of it than you want.”
Frivolity. Not taking measurement seriously is perhaps the most grievous sin of them all. The symptoms are easy to see: arguing about metrics instead of taking them to heart, finding excuses for poor performance instead of tracking root causes, and looking for ways to blame others rather than shouldering the responsibility for improving performance. If the other errors are sins of the intellect, this is a sin of character and corporate culture. An oft-heard phrase at one financial services company is “The decision has been made; let the debates begin.” When self-interest, hierarchical position, and voice volume carry more weight than objective data, even the most carefully designed and implemented metrics are of little value.
As with the seven deadly sins, the sins of measurement often overlap and are related; a single metric may be evidence of several sins. A company that commits these sins will find itself unable to use its metrics to drive improvements in operating performance, which is the key to improved enterprise performance. Bad measurement systems are at best useless and at worst positively harmful. And don’t be fooled by the old adage “That which is measured improves.” If you are measuring the wrong thing, making it better will do little or no good. Remarkably, these seven deadly sins are not committed only by poorly managed or unsuccessful organizations; they are rampant even in well-managed companies in the forefront of their industries. Such companies manage to succeed despite their measurement systems, rather than because of them.
— Michael Hammer and Lisa W. Hershman
Excerpted from Faster, Cheaper, Better by Michael Hammer and Lisa W. Hershman © 2010 Hammer and Company. Reprinted by permission of Crown Business, an imprint of the Crown Publishing Group.