Common Practice: The Third Level of Leading Indicators

EHS Today - September 2014
By: Terry L. Mathis

In my last two articles, I described the first two levels of leading indicators that might comprise a balanced scorecard for safety. The first-level indicators are measurements of safety drivers: leadership, supervision, training, meetings and other activities designed to make safety happen. The second-level indicators are measurements of how these drivers impact the safety culture, notably through common definitions, perceptions and competencies. This article will focus on the last of the three levels: safety performance as demonstrated by common practice. Some have suggested that performance is a lagging indicator, but I would argue that it is the results of performance (e.g., recordable and severity rates, the costs of accidents, and so on) that are the lagging indicators.

Some scholarly works on culture have confused the concept of common practice. Suggestions that culture is "the way we do things around here" or "what people do when you are not looking" tend to conflate culture with common practice. There is no doubt that common practice is an indicator and a result of culture, but the two are not synonymous. Culture is a complex set of shared perceptions, values and other factors that shape common practice. Since these factors of culture are invisible and common practice is observable, many have merged the two in their thinking. But it is crucial that we not confuse this artifact of culture with culture itself.

Also, many have suggested that a perception survey alone is an adequate measurement of culture. There is no denying that perceptions are a part of culture, but they are certainly not all of it. The purveyors of perception surveys have convinced some that such surveys are the ultimate leading indicator. While they have undeniable value, perception surveys are not stand-alone metrics that can fully enable proactive safety management.

Some of these common misconceptions have resulted from the difficulty in measuring common practice. The struggle to find such a metric was temporarily derailed by behavior-based safety. The idea of workplace observations seemed to be the potential answer to measuring work practices. Defining safety performance in terms of observable worker behaviors and measuring them was appealing. Many tried various observation processes with good results, and the practice continues to spread to new organizations, industries and countries, even today.

Observation processes have undergone many revisions over the years. Early practices targeted at-risk behaviors and tried to eliminate them. Later practices sought to observe safe behaviors (or precautions) and then reinforce workers until using those precautions became habitual. Many processes moved from long observation checklists of behaviors to shorter, more focused ones. The style of observations evolved from confrontation to coaching. Many processes compromised the validity of their metrics by announcing observations or asking permission to observe in order to better enable the desired behavioral change. Organizations that implemented behavior-based safety early often stayed with their original process and missed many of these advances.

Unfortunately, some behavior-based safety consultants stress the value of the interaction between observer and worker so heavily that it can minimize or eliminate the metrics created by the observations. Others fail to truly analyze and understand what observation metrics reveal. The most successful uses of metrics have two commonalities: 1) a sampling strategy for observations that enables data trending, and 2) capturing the influences on behavior by asking the observed workers why they did what they did. Data that is measured a different way every month cannot be trended. Yet it is the trends of this data that prove to be the true indicator of whether or not the process is producing positive behavioral change. The data on the influences on behavior enables organizations to develop meaningful action plans to facilitate the desired behavioral changes. The idea came late to behavior-based safety that people do things for a reason, and that changing the reason is the surest and most sustainable way to change the behavior.
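The trending idea can be made concrete with a minimal sketch. Assuming an organization records a monthly "percent safe" figure from a uniform sampling strategy (the data format and numbers below are illustrative assumptions, not prescribed by the article), a simple least-squares slope indicates whether observed practice is improving month over month:

```python
# Illustrative sketch only: assumes monthly "percent safe" figures
# gathered under the SAME sampling strategy each month, so the series
# is comparable and can legitimately be trended.

def trend_slope(percent_safe_by_month):
    """Least-squares slope of percent-safe observations per month.

    A positive slope suggests the observation process is producing
    positive behavioral change; a flat or negative slope suggests not.
    """
    n = len(percent_safe_by_month)
    xs = range(n)  # month index: 0, 1, 2, ...
    mean_x = sum(xs) / n
    mean_y = sum(percent_safe_by_month) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, percent_safe_by_month))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Example: six months of observation data (hypothetical values).
monthly = [82.0, 84.5, 83.0, 86.0, 88.5, 90.0]
print(round(trend_slope(monthly), 2))  # → 1.57 (percent safe gained per month)
```

The point of the sketch is the precondition, not the arithmetic: if the sampling method changes every month, the slope is meaningless, which is exactly the trending problem described above.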

For organizations that are averse to full behavior-based safety processes, the concept of observations may still be viable. Many have separated the interaction from the observation and utilized a "walk-through" auditing approach to measure common practice. This is sometimes referred to as a SWEEP (Seeing Without Explaining to Every Person) observation. Again, the key to making these observations/audits useful is a uniform sampling strategy so the data can be trended.

In our recent book, Shawn Galloway and I suggested that organizations should target specific safety improvements and track them through the metrics of a balanced scorecard for safety. For example, one organization targeted a specific housekeeping improvement. Its leaders announced the targeted improvement and explained the rationale for pursuing it. In scorecard phase 1, the organization created a training module to address the efforts needed, asked supervisors to mention the target in meetings for the next two months, asked leaders to continue to communicate about it regularly, and put up posters about the goals. In scorecard phase 2, it added items to its perception survey about the improvement target and tracked the increase in knowledge and acceptance of the process. In scorecard phase 3, it asked observers/auditors to look for workers either working on the goal or neglecting it and tracked the percentage of participation. In scorecard phase 4 (lagging indicators), it tracked the specific accident and near-miss rates that originally prompted the improvement target to ensure the program was impacting them.
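The four phases above can be read as one record per scorecard level. As a hedged sketch (the metric names and values here are hypothetical illustrations, not taken from the housekeeping example), the structure might look like:

```python
# Hypothetical sketch of a four-phase scorecard for one improvement target.
# Every metric name and value below is an illustrative assumption.

housekeeping_scorecard = {
    "phase_1_drivers": {          # leadership, training, meetings, posters
        "training_module_complete": True,
        "supervisor_meeting_mentions": 8,
    },
    "phase_2_culture": {          # perception-survey items on the target
        "knowledge_of_target_pct": 78,
        "acceptance_of_target_pct": 71,
    },
    "phase_3_common_practice": {  # observation/audit data
        "workers_observed_participating_pct": 64,
    },
    "phase_4_lagging": {          # the results that prompted the target
        "related_recordables_ytd": 2,
        "related_near_misses_ytd": 9,
    },
}

# A simple completeness check: every level is actually being measured.
unmeasured = [phase for phase, metrics in housekeeping_scorecard.items()
              if not metrics]
print(unmeasured)  # → [] when all four levels have at least one metric
```

The design point is that no single level stands alone: driver, culture, common-practice and lagging metrics for the same target sit side by side, which is what makes the scorecard "balanced."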

A balanced scorecard for safety can help organizations take a more strategic approach to safety and practice a more proactive safety management style. Rather than following the old cycle of avoiding failure, in which success is measured strictly by lagging indicators, targeted improvements can be measured at each level. Drivers of the improvement can be measured to see if they are operating as intended. Culture can be measured to see if the drivers are changing the shared values, competencies and mindsets of workers. The common-practice element of performance can be measured to see if drivers and culture are actually changing behaviors. And, lastly, lagging indicators can be utilized to determine if the effort is truly producing the desired results.
