
This chapter is intentionally short to help you focus on the power of metrics. You must respect that power and be careful of the damage metrics can do to your organization if wielded improperly.

When I was a young Airman in the United States Air Force, I had the privilege to work with a civilian electrician, Tom Lunnen. Tom was a no-nonsense guy and a good friend. He was older than me (still is) and helped me with good advice on more than one occasion. Perhaps the best advice he gave me was to respect the power of electricity, because even trained electricians have been badly hurt, or worse, performing electrical work. Electrical injury is the second leading cause of fatalities in the construction industry.

I bring this up because metrics are like electricity. Metrics can be used to do a lot of good. As a tool, they can help us understand our environment. They can help us evaluate how well our efforts are going. They can make communication easier and clearer. But like electricity, metrics must be respected. If you follow the rules, you can use electricity, and metrics, to make life better. But even if you follow the rules, there remains the potential to cause damage. The risks are high enough that you have to decide in each case whether the benefits are worth it.

Most of this book discusses how to develop, analyze, report, and, most importantly, use metrics for improvement. But unlike most organizational improvement tools, such as training plans, strategic plans, and employee recognition programs, metrics can do as much harm as good if used improperly. And in most cases, it's because the wielder of the data isn't well-trained or wary enough to understand the powerful but risky nature of metrics.

Metrics have the potential to do more harm than good.

As I've already covered, you have to work hard to get to the right root question: develop an abstract picture of the answer, identify the information needed to paint the picture, and then painstakingly set up processes for collecting, analyzing, and reporting the metric. And through all this, you have to double- and triple-check everything, from the data to the collection methods to the metric itself, and then, finally, the root question.

Be diligent and rigorous in your efforts because it is extremely easy to make errors. Even when all of your data are verified, you can have errors in interpretation. I once had an interesting debate with a coworker over the concept of facts. He felt that metrics, at least good ones, were facts; and if they weren't facts, we shouldn't use them to demonstrate performance. I had to explain, at least from my view, that metrics are not facts. Metrics are first and foremost indicators. They give us insight, but they are not necessarily the truth that is being sought. They are not facts.

Metrics are not facts. They are indicators.

Metrics: Indicators or Facts?

This distinction as to whether metrics are indicators or facts is at the core of proper metrics use. If we treat metrics as facts, we run the real risk of making decisions too hastily.

How about measures or data? Is the speedometer on your car relaying facts about your speed? Is it precisely accurate, or does it have a +/- deviation? If your speedometer says you're going 55 and the police radar says you're going 58, which is the truth?

You may argue that while there are variances in measuring devices, there is obviously a true speed you were traveling at. And I'd agree. You were definitely traveling at a specific speed at a given moment. But I have little faith that any device used to capture a particular moment in time is accurate enough to call that measure a "fact."
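To make the measurement-error point concrete, here is a minimal sketch in Python. The +/- 2 mph tolerances are assumptions chosen for illustration, not real device specifications; the point is only that two disagreeing readings can both be consistent with the same true speed.

    # Each instrument reports a reading with an assumed +/- tolerance.
    def consistent_range(reading, tolerance):
        """Range of true speeds consistent with a reading."""
        return (reading - tolerance, reading + tolerance)

    speedometer = consistent_range(55, 2)  # assumed +/- 2 mph
    radar = consistent_range(58, 2)        # assumed +/- 2 mph

    # True speeds that both instruments could be reporting at once.
    overlap = (max(speedometer[0], radar[0]), min(speedometer[1], radar[1]))
    print(overlap)  # (56, 57): each reading is an indicator, not the fact

Under these assumptions, neither reading is wrong, and neither is "the fact" either.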

Let's try subjective measures. Say I ask you to rate your satisfaction with my service on a scale from 1 to 5, with 1 being highly dissatisfied, 2 being dissatisfied, 3 being neutral, 4 being satisfied, and 5 being highly satisfied. I should be able to consider your choice to be a fact, right?

Wrong.

The only fact that I can ascertain on a survey is that the answer I receive is the answer you gave. And even then we may have errors. In customer satisfaction surveys, we often find that respondents get the numbers inverted and give 1s when they meant to give 5s. Barring this type of error, can't we say the results are facts? Again, the only thing we can categorically attest to is that the answer we have is the answer the respondent chose. We cannot know for a fact that the answer given was the true answer.

This uncertainty has been analyzed and researched to the point where I can say with confidence that most answers are actually not true. In The Ultimate Question (Harvard Business Press, 2006), Fred Reichheld researched the best customer satisfaction questions to ask to determine potential business growth. His study was based on responses from promoters (those who would recommend a product/service) and detractors (those who would steer people away from a product/service). One by-product of this effort was the realization that people don't answer surveys in a totally truthful manner.

Basically, Reichheld found that on a 10-point-scale question, a "6" is not truly neutral. Most people who felt neutral about the product or service being rated actually gave 7s or 8s, although this range was clearly marked as being more favorable than neutral.

I believe this happens on a 5-point scale also. Most customers don't want to give you a "3" if they feel ambivalent about the product or service. Let's look at a simple translation, shown in Table 14-1, which I propose is much closer to the truth for the majority of respondents of a customer satisfaction survey.

Table 14-1. Translating customer satisfaction ratings

    Rating  Survey label           Closer to the truth
    5       Highly satisfied       Quite satisfied
    4       Satisfied              Barely satisfied, or indifferent
    3       Neutral                Dissatisfied, but not enough to say so
    2       Dissatisfied           Very dissatisfied (just short of angry)
    1       Highly dissatisfied    Angry

What I've found is that unless the respondent was actually angry about the service, he won't give it a "1." Therefore, 2s become the choice of the very dissatisfied (those just short of angry). Threes are given by customers who are not satisfied, but not enough to say so. Fours are provided by those who are either barely satisfied or indifferent. And 5s are given by those who are quite satisfied.

If you discounted all 3s as neutral responses, you may be ignoring a large contingent of dissatisfied customers.
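As a rough illustration of how much this re-reading of the scale matters, here is a minimal Python sketch. The batch of responses and the two interpretations are assumptions built from the translation proposed above, not data from a real survey.

    from collections import Counter

    # Hypothetical 5-point survey responses (invented for illustration).
    responses = [5, 4, 4, 3, 5, 3, 4, 2, 5, 3, 4, 5, 3, 4, 5]

    # Conventional reading of the scale vs. the translation proposed above.
    conventional = {1: "dissatisfied", 2: "dissatisfied", 3: "neutral",
                    4: "satisfied", 5: "satisfied"}
    proposed = {1: "dissatisfied", 2: "dissatisfied", 3: "dissatisfied",
                4: "indifferent or barely satisfied", 5: "satisfied"}

    print(Counter(conventional[r] for r in responses))
    # -> 10 satisfied, 4 neutral, 1 dissatisfied
    print(Counter(proposed[r] for r in responses))
    # -> 5 satisfied, 5 indifferent or barely satisfied, 5 dissatisfied

The raw numbers never change; only the interpretation does. Treating the 3s as neutral hides most of the dissatisfaction in this made-up batch.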

So, what metric is a fact? Especially given our definition of a metric, which is made up of multiple data points, measures, information, and at times even other metrics. There are enough variables at this level to leave any answer well short of "fact." How about at the lowest levels, though? How about data? Can't data be trusted to be factual?

No. Scientists keep finding that things they knew to be a scientific fact yesterday are totally wrong today. Automated data collection systems can easily be miscalibrated and provide erroneous data (my bathroom scale is constant proof of this). When we add people to the equation, the possibility of errors increases.

Technology is great; as it advances, the accuracy of data increases. But even when using current technology, it is critical not to treat metrics as facts. When you give metrics more weight or significance than they deserve, you run the risk of making decisions based solely on the data.

Although base information should never be considered entirely factual or without fault, that shouldn't deter you from the proper use of metrics. Hopefully this knowledge will guide you to use metrics as they are intended to be used: as indicators that help support your decisions.

Metrics should never replace common sense or personal involvement.

Misused Metrics: "Our Customers Hate Us"

Let me provide a real-life example of how a manager's well-intentioned use of metrics did more harm than good. A team of hard workers was told that they were hated by their customer base. Or at least that's how they interpreted the story shared with them by their boss.

Every two weeks, the CEO would meet with his department heads. For the first 30 minutes of the meeting, they'd review every customer satisfaction rating of a 1 or 2 (out of 5) across the organization. These ratings were labeled highly dissatisfied and dissatisfied.

In reality, all this exercise showed was the number of respondents who chose a 1 or 2 rating. We don't know much more than that.

The comments with each rating, when given, were also scrutinized. Based on these comments, most customers were clearly unhappy, but occasionally it was obvious that the respondent simply picked the wrong rating.

Looking at each case, it was clear to me that most customers only gave a 1 when they were angry, and they always used the opportunity to give a lengthy comment on why they were upset.

After the department heads reviewed the surveys with 1s and 2s, if time allowed, they would look at 3s since the comments provided normally indicated a level of dissatisfaction and pointed toward areas that could be improved.

This review was well-intentioned. The company leaders were, after all, listening to the customer's voice. That's why they administered the surveys and reviewed the negative responses.

For each survey response, the following needed to be explained:

- Why the low rating was given. If a customer's comments weren't clear (or there were no comments), someone on staff should have contacted the customer for clarification.

- What was done to "make it right" with the customer.

- What could have been done to avoid the low rating. (A better way of phrasing this would be, "What could we have done to prevent customer dissatisfaction?" A nuance, but important. If we tie our improvements to the measure rather than to the behavior or process, we run the risk of improving the numbers without changing the behavior or process.)

Unfortunately, the last item, how to improve so that customers are not dissatisfied in the future, received little attention in these meetings. This isn't uncommon, however. I've seen leadership demand explanations of why customers were dissatisfied, when the goal should be to improve processes to eliminate repeat occurrences.

So, the department heads did what you might expect. They reviewed the survey results well ahead of the meeting. They identified which teams were the recipients of the ones, twos, and threes. They tasked those teams (through their managers) to:

- Contact the customer and determine the nature of the problem.

- Explain the cause of the poor rating.

- Explain what they were going to do to keep from getting that rating again (yes, at this point the manager wasn't using the metric properly).

What the workers heard was:

- Contact the customer and see if you can appease them.

- Figure out who was to blame.

- If you couldn't appease them, and you were to blame, what are you going to do about it?

But let's get back to the impact on the team. All they heard from leadership was that the surveys were highly critical of them: the customers obviously hated them. Since leadership only shared the lower-rating surveys, the team assumed that they never received higher customer satisfaction ratings.

The funny thing is that all of the surveys were available to the team, but no one on the team had ever considered reviewing the surveys for himself.

What you say may not be what others hear.

The team believed they were the dregs of the organization due to the following mistakes in handling the customer satisfaction metric:

- The CEO and department heads (innocently) requested explanations for each poor customer satisfaction rating.

- The manager passed on this request to the team without considering the effect it would have on them.

- The manager never bothered to review the surveys for his team.

- The team never bothered to use the survey reviews for anything other than appeasing the bosses.

Bottom line? The data was only being used by upper management to ensure service quality for the customers. And when the requests for more information came downstream (a good thing in itself), the surveys were taken "out of context" (the team believed they were hated by customers) and no one shared or looked at any of the positive comparison data.

This innocent behavior created stress, low morale, and a misperception of how satisfied the customers were with the team.

After more than a year of this type of interaction, I was tasked with developing a scorecard for the key services in our organization. This team's service was one of our core services, so I visited them to develop their scorecard.

When I offered to include customer satisfaction on the scorecard, I met unexpected resistance. I was not aware of what they'd been going through every two weeks.

I knew that customer satisfaction ratings were consistently a strength in the larger organization, and I was sure that this service would be no different. But the team was just as confident that the ratings would be horrendous. They also argued that since each of the customer satisfaction surveys was administered to customers who had had problems (hence the need for the second-level support they provided), the results would be skewed against them.

Again, I tried to assure them that this was not normal for the organization. And again, speaking from their observations and experience, they assured me it was going to be ugly.

One of my best moments working with metrics happened when I presented the full metrics on customer satisfaction to this team. It turned out that the ratio of highly satisfied customers compared to those who gave them lower ratings was far higher than the team realized. In fact, the team's customer satisfaction ratings were consistently ten-to-one in favor of good service!
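The comparison that changed the team's outlook might be sketched like this; the counts below are invented for illustration (they are not the team's actual numbers), but the arithmetic shows why seeing the full distribution matters.

    # Invented counts of ratings for one service over a reporting period.
    ratings = {5: 180, 4: 65, 3: 12, 2: 7, 1: 11}

    highly_satisfied = ratings[5]
    low_ratings = ratings[1] + ratings[2]   # all that leadership reviewed
    total = sum(ratings.values())

    print(f"{highly_satisfied / low_ratings:.0f}-to-1, highly satisfied vs. low ratings")
    print(f"{low_ratings / total:.0%} of all responses were 1s or 2s")

Seen only through the 1s and 2s, the team looked hated; seen against the whole distribution, the same surveys told the ten-to-one story.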

While the team, their manager, and I were very happy about the outcome, it was enlightening to all of us how such a seemingly logical use of a metric could cause so much harm.

The damage to the team's morale was enough to confirm for me the need to be extremely careful with metrics.

Misuse of Metrics: The Good, the Bad, and the Ugly

Respect the power of metrics. This respect should include a healthy dose of fear and awe. With it, I hope users at all levels will exercise a little caution in how they let metrics affect their decisions.

Be assured: used improperly, metrics can seriously endanger your organization's health. Not respecting the power of metrics often results in errors in the way we use them. These errors manifest as forms of misuse.

The Good

The following misuses of metrics are classified as "good" not because they are acceptable, but simply because the perpetrator lacks malicious intent, innocently misusing a metric rather than deliberately causing damage. Sometimes this is due to arrogance, other times to ignorance. In either case, a healthy dose of respect would solve the problem.

Sharing only part of the story. Remarkably, after spending time and investing effort to develop a complete story, people still mistakenly share only part of the story. This seems counterintuitive. Whenever you selectively share parts of the metric and not the whole story, you distort the message. Don't create misinformation by simply not sharing the whole story.

Not sharing the story at all. Again, why go through the effort to develop a full story only to hoard the results? Not only must you share the metrics with the customers (those who could and should use them), but you should also share them with those who are providing you the data. Not all those who use the metric will be providers of data, but all those who provide data should be users. Another way I see this manifested is reluctance to build the metric at all. It happens because of fears of what the metric will show. Most times I hear that the "data is invalid" or "we can't get the data." Basically, these are excuses designed to kill the metric before it is ever created.

Sharing only good metrics. The most common reason for not showing all the metrics is that, in someone's opinion, something in the metrics makes someone else look bad. Of course, if you're using metrics properly, they are indicators for the purpose of improving. If you only have good results, then what do you need to improve? The reluctance to show unfavorable results misses the point of metrics. To improve, you need to know where improvement is needed. To show progress, you need to be able to show improvement. To show only "good results" is to cheat someone of the information needed to help them improve.

Sharing only bad metrics. There are times when only the negative results are shared. Purposely. For example, when a manager wants to "motivate" his staff, he may choose to make things look a little worse than they are. We won't go into the more sinister abuses of metrics; I'll leave that to your imagination. Suffice it to say, another misuse of data is to reveal only the negative results.

Showing the data. Remember the difference between data, measures, information, and metrics? Showing data (or measures) means that you distract the viewer from the story. It's like showing the used palette instead of the painting. When you show data (instead of the metric) you invite the viewers to do their own a.n.a.lysis and form their own stories.

You may have noticed a theme to these examples of "good" misuses of metrics. Most are born of not showing the complete story. This supports why the use of root questions and the development of a complete story is so important.

The Bad

In contrast to the innocent misuse of metrics, the "bad" describes knowledgeable misuse. You would think this would be the rare case. You would hope that those receiving metrics would not knowingly misuse them. But some don't respect the destructive power of metrics; they wield them haphazardly and end up causing serious damage. These types of misuses are as follows:

Using metrics for a personal agenda. After seeing the metrics, there are those who decide that the metrics can be used to further their own cause. And there are those who may actually task the creation of metrics for the sole purpose of fulfilling a personal agenda. These people are easy to spot. They refuse to work with you to determine the root question, either out of embarrassment over a transparent desire for a specific answer or from reluctance to share before they can prove their case. These people offer numerous excuses to avoid getting to the root. If you're trying to do metrics right, you'll be extremely frustrated by this abuser.

Using metrics to control people. There is a group of professionals who make a living developing, analyzing, and reporting "performance metrics" specifically designed to measure how well people perform. You can also use performance metrics to evaluate processes, systems, and even hardware. From discussions in online community sites, it is a widely held belief that performance metrics are a good tool for manipulating people's behavior. This is unacceptable. The words "control" or "manipulate" may not be expressly used, but when someone says "you can and should drive performance using metrics," that is what is meant.

Using metrics to make decisions. I understand that management wants metrics (they call it data) to base their decisions on so that they are making "informed" decisions. I am not saying this is bad. It only becomes bad when leadership believes that the metrics are facts. It becomes an issue when decisions are made solely as a result of the metric. Metrics can be used to inform decisions, but only after they've been investigated and validated. It's not enough to know the what (the metric) if you're going to make decisions based on it; you also have to get to the why. When my gas gauge shows near empty, I make the decision to get gas right away or wait a little while. This decision isn't critical to anyone. If the gauge shows near empty, it shouldn't hurt if I don't get gas immediately. I know the gas gauge is an indicator and not a fact (depending on whether I'm going uphill or downhill, the reading changes). I also know it can potentially be incorrect; if it shows empty when I've just filled the tank, there is something else wrong. Bottom line? I use data, measures, information, and metrics to inform my decisions, not to make them.

Using metrics to win an argument or sway opinion. This is probably the most common misuse of metrics. We see it in politics. We see it in debates. We see it in funding battles across the conference room table. The problem isn't that you use metrics to prove your point; it's that you use only the data that helps your case and ignore the rest. This is a grievous misuse.

You may have noticed that most of these "bad" misuses are based on how the metrics are used: the intention behind the report. If your intentions are bad (selfish, manipulative, controlling, or lazy), you will end up misusing the information. Negative intentions drive you to misuse metrics in the worst ways.

The Ugly

If the good is a result of non-malicious intention, then the "ugly" is a direct result of malicious intent. I won't spend a lot of time on this, because those who have the intent to misuse metrics probably aren't reading this book.

The reason I'm discussing this at all is just to remind you that there are those who would intentionally use metrics to cause harm.

So, you have to respect the power of metrics. Not only must you ensure you are careful with how you use them, but you have to protect others from the dangers. This is part of the trust you need to build with those who provide the data. Just because you would never purposefully use metrics to hurt others, it doesn't mean others won't. When you take on the responsibility of collecting, a.n.a.lyzing, and reporting metrics, you also have to protect others.

Constant diligence is required to ensure metrics are used properly.
