Measuring Safety Culture: Why Perception Surveys Are Not Enough

EHS Today - April 2009
By: Terry Mathis, ProAct Safety

There is a basic flaw in the way companies often measure and manage safety cultures.

Several years ago, I attended a meeting at the invitation of a major petrochemical company. I had known the spokesman for the organization for many years and worked with him on several occasions. He presented his problem to me in very personal terms: “Terry, a recent survey indicates that our workers don't think our managers are serious about safety. If we hire you to help fix this problem, what will be your first step?”

I responded that my first step would be to find out if the perception was accurate or not. The group looked a bit shocked and asked if I was suggesting that it was accurate. I explained that the scope of the issue depended on the accuracy or inaccuracy of the perception. If the perception was inaccurate, I would only need to fix the perception. If the perception was accurate, there would almost surely be a larger problem to solve.

This story illustrates a basic flaw in the way we often measure and manage safety cultures. Perceptions can be inaccurate. Organizations that measure perceptions and react to the findings without further information may be chasing a specter. Perceptions alone are ungrounded metrics; they lack the frame of reference from which appropriate actions can be determined. Measuring perceptions only tells you whether those polled hold a perception and whether the perceptions of the group are similar or dissimilar.

PROBLEMS WITH PERCEPTIONS

Some try to ground perception metrics by comparing them to the perceptions of others who have taken the same survey. This only yields a comparative score for the organization or site relative to the rest of the polled group. Often that group has no real definition; it is simply “everyone who has taken this survey.” You still don't know whether the group represents safety excellence, mediocrity or totally poor performance.

How, exactly, do you respond to this placement information? Will improving your perception placement score from 40 percent to 70 percent translate into improved safety performance? Should any 40 percent score take priority over any higher score? Does a variation in perceptions across sites or departments represent a difference in practice or simply a difference in perception of performance? An old axiom of comparative metrics is that 49 percent of every group is below average.

To understand the limitation of this type of metric, imagine a group of people in a lifeboat at sea on a cloudy day. There are no landmarks in sight and no one can see the sun, moon or stars. Someone in the boat shouts out, “Which way is north?”

The response of those in the boat will provide a certain type of information. You will find out if anyone thinks they know which way is north. You also will find out what percent of the group agrees or disagrees. What you don't know from this “perception survey” is whether any or all of the group's perceptions are correct.

Another problem with measuring perceptions is the volatility of what you are measuring. Perceptions can be changed drastically by events and the flow of information. Ask everyone if they think this is a safe place to work right after you have announced layoffs.

Fail to publish or act on the results of the last perception survey and see what kind of barrier you have created when you administer the next one. Measure too often or too seldom and you will skew the answers. One site I worked with recently had its perceptions audited so often that people had become numb to the process. I referred to their condition as “auditism.”

These problems do not mean that perceptions should not be measured or that such measures are hopelessly flawed. They simply mean that you need more information to put the perceptions into a manageable context. I call this process “grounding” the metrics.

Grounded metrics are multiple metrics that give each other context. For example, do perceptions of risk match accident data on which risks most often result in accidents? These kinds of data sets put each other in the proper context and make it possible to tell whether the perceptions are accurate or inaccurate.
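
As a rough sketch of what grounding can look like in practice (the risk categories, survey scores and incident counts below are invented, not from any client), a simple rank comparison shows whether the risks workers perceive as most dangerous are the same risks that actually produce the most incidents:

    # Hypothetical example: compare a perceived-risk ranking to actual accident
    # counts. All category names and numbers are invented for illustration.

    perceived_risk = {            # average survey score, 1 (low) to 5 (high)
        "falls from height": 4.6,
        "chemical exposure": 4.1,
        "hand injuries": 2.8,
        "strains and sprains": 2.5,
    }

    recordables = {               # incidents over the same period
        "falls from height": 2,
        "chemical exposure": 1,
        "hand injuries": 14,
        "strains and sprains": 11,
    }

    def rank(scores):
        """Rank categories from 1 (highest score) downward."""
        ordered = sorted(scores, key=scores.get, reverse=True)
        return {category: position + 1 for position, category in enumerate(ordered)}

    perception_rank = rank(perceived_risk)
    accident_rank = rank(recordables)

    # A large gap between the two ranks flags a perception that may be ungrounded.
    for risk in perceived_risk:
        gap = perception_rank[risk] - accident_rank[risk]
        print(f"{risk:22s} perceived #{perception_rank[risk]}  actual #{accident_rank[risk]}  gap {gap:+d}")

In this invented data set, hand injuries rank near the bottom of workers' perceived risks but near the top of the actual incident counts; the mismatch between the two rankings, not either number alone, is what tells you the perception needs attention.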

COMMUNICATION

I have been asked to solve numerous problems that were simple issues of miscommunication or a lack of formal communication. Perceptions are especially vulnerable to communication, or the lack of it. Therefore, it is critical that every perception survey include an “uncertain” response and that the percent uncertain be calculated. A high percent uncertain on any perception can potentially indicate a lack of communication on that issue.
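
Calculating the percent uncertain is straightforward to automate. The sketch below assumes a simple tally of survey responses, with invented items and counts; any item whose uncertainty exceeds a chosen threshold is flagged as a possible communication gap:

    # Hypothetical survey tally: each item maps to a list of responses
    # ("agree", "disagree" or "uncertain"). All data are invented.

    responses = {
        "Managers act promptly on reported hazards": ["agree"] * 34 + ["disagree"] * 6 + ["uncertain"] * 10,
        "Near misses are always investigated":       ["agree"] * 12 + ["disagree"] * 8 + ["uncertain"] * 30,
    }

    UNCERTAIN_THRESHOLD = 0.25    # assumed cutoff; each organization would set its own

    for item, answers in responses.items():
        pct_uncertain = answers.count("uncertain") / len(answers)
        flag = "  <-- possible communication gap" if pct_uncertain > UNCERTAIN_THRESHOLD else ""
        print(f"{pct_uncertain:4.0%} uncertain  {item}{flag}")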

A Harvard Business School researcher recently conducted a study in which he asked managers how often they communicated on the subject of safety. Managers at the site being studied rated their level of safety communication as high.

The researcher then asked managers to keep a log of communication on safety for 90 days, recording the number of times they talked to subordinates and the number of times their communication included the topic of safety. After the study, managers lowered their self-ratings by over 60 percent. Many organizations do not view communication as a means of managing perceptions, but it often is. Perceptions that are not managed will vary with individual experience and levels of communication. The management of perceptions may be the new frontier of safety culture management.

One of the leading edges of safety culture management is the merging of safety metrics into something resembling the strategic management model of the balanced scorecard. Strategic managers realized the limitations of lagging indicators and began exploring alternatives under the tutelage of Kaplan and Norton. They developed three other groupings of metrics that centered on the organizational mission statement and helped managers see the reality of the business (what Deming called “profound knowledge”) and manage more efficiently.

Several companies and organizations are engaged in similar efforts in safety, and their approaches have some interesting commonalities. Safety, like strategic or economic management, has lagging indicators: we normally measure recordable rates, severity rates, safety-related costs and so on. Additionally, these organizations are measuring safety processes such as training, meetings and new-employee orientations to see if they are going according to plan.

Process metrics are the first of these leading groupings. The second is perceptions: are these efforts affecting how workers think about safety? The third is behaviors: do these efforts, and the resulting changes in perceptions, affect what people actually do in the workplace (common practice)? These three drive the fourth, which is results, or lagging indicators. This multiple-metric view of safety has turned into a digital dashboard for many organizations. The final step will be to develop the algorithms that explain how these metrics impact each other.
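
As a minimal sketch of such a dashboard (the metric names, quarterly figures and the simple correlation used here are assumptions for illustration, not any particular organization's tool), the four groupings can sit side by side so that the leading metrics can be compared against the lagging one:

    # Hypothetical quarterly dashboard: three leading metric groupings and one
    # lagging metric. All figures are invented for illustration.

    quarters   = ["Q1", "Q2", "Q3", "Q4"]
    process    = [0.72, 0.80, 0.85, 0.90]   # share of planned trainings, meetings, orientations completed
    perception = [0.55, 0.60, 0.68, 0.74]   # favorable responses on the perception survey
    behavior   = [0.61, 0.66, 0.73, 0.80]   # safe-behavior rate from observations
    recordable = [3.1, 2.8, 2.4, 2.0]       # recordable incident rate

    def pearson(x, y):
        """Plain Pearson correlation, with no external libraries."""
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
        sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
        return cov / (sd_x * sd_y)

    for name, series in [("process", process), ("perception", perception), ("behavior", behavior)]:
        print(f"{name:10s} vs recordable rate: r = {pearson(series, recordable):+.2f}")

Four data points obviously cannot support the kind of predictive algorithm described above; the sketch only shows the shape of the data such a dashboard would hold and how leading and lagging metrics would sit alongside each other.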

It may be possible in the near future to measure how a new training program impacts perceptions, which in turn impact behaviors, which in turn impact accident rates. It may one day be possible to predict these results, and the return on investment of such projects, in advance. Until then, we must recognize the limitations of perception surveys and strive to ground them in the realities of the safety culture to better understand how such data can be used effectively to improve the cultures we are trying to measure.







