EHS Today - July 2014
By: Terry L. Mathis
The way we measure safety has contributed to our tendency to manage it reactively. All our early safety metrics were reactive, i.e., collected after accidents or incidents occurred. Since our metrics were essentially failure metrics, we fell into a pattern of managing safety to produce fewer failures. The serious problem with this approach is that it reaches the limits of its effectiveness before it tells us how to prevent all accidents: as we fail less, our failure data diminishes, losing its statistical significance before our performance reaches zero accidents.
This limitation of traditional safety metrics and management has spawned a search for what are commonly called "leading indicators" of safety, which will allow us to better predict and prevent accidents before they occur. Although this thinking is going in the right direction, it has not gone far enough. Ultimately, safety will have multiple metrics connected by algorithms that yield truly prescriptive guidance for managing safety. This set of multiple metrics will form something similar to the balanced scorecard used by strategic managers. It will have at least four major sets of metrics, the first of which might be called "safety drivers." These are key performance indicators of our major safety efforts designed to improve organizational safety conditions and behaviors. They fall into five major categories: leadership, supervision, conditional control, onboarding practices, and knowledge/skill building.
Leadership is considered a driver of safety and is measured in many organizations with excellent safety performance. Leaders' activities are often the crux of such metrics: the percentage of their official communications that mention safety topics, their reinforcement of safety strategies in regular interactions and performance appraisals with direct reports, their contributions to ongoing safety strategy development, and their drop-in rate on safety meetings and training sessions. Executive-level personnel who do not directly supervise people, such as planners and engineers, are often measured on their consideration of safety in their plans and designs, and their inclusion of workers for input on such plans and designs.
Supervision is often measured in terms of safety coaching. Some organizations measure the amount of safety-coaching training and refresher training supervisors attend. Others also measure the supervisors' efforts to create focus on specific safety-improvement targets. Still others measure the number of supervisor-to-worker contacts that result in safety feedback on performance. Some organizations also measure the number of influences on worker behavior the supervisor addresses, such as correcting perceptions about best practices, providing well-spaced reminders to help workers form safety habits, and ensuring the availability of tools and equipment convenient to the worksites.
Conditional control of safety issues is most often measured as the percentage of safe vs. unsafe conditions discovered on periodic audits of the workplace. There are also opportunities to measure the percentage of discovered unsafe conditions actually addressed with action plans and brought to resolution. Some advanced programs measure the discovery of new or previously-undetected risks, or solutions to older ones. Some organizations actually give the conditions scores based on the projected probability that the risk could cause an accident and the potential severity of the accident.
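The probability-and-severity scoring described above can be sketched in a few lines of code. This is only an illustration: the 1-5 rating scales, the multiplication rule, and the example conditions are assumptions for demonstration, not a standard the article prescribes.

```python
def risk_score(probability: int, severity: int) -> int:
    """Combine a 1-5 probability rating and a 1-5 severity rating
    into a single risk score (1-25, higher means more urgent)."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("ratings must be on a 1-5 scale")
    return probability * severity

# Hypothetical audit findings: (condition, probability, severity)
findings = [
    ("unguarded pinch point", 4, 5),
    ("blocked fire exit", 2, 5),
    ("frayed extension cord", 3, 2),
]

# Rank conditions so action plans target the highest risks first.
ranked = sorted(findings, key=lambda f: risk_score(f[1], f[2]), reverse=True)
```

Ranking the findings this way lets the action-plan and resolution metrics from the same paragraph focus on the conditions most likely to cause a serious accident.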
Onboarding practices in safety include selection and screening of potential candidates as well as the initial orientation, formal training and on-the-job training or mentoring new employees receive. The most common metric derived from these practices is simply a completeness score (was the candidate put through all interviewing and onboarding steps in the prescribed order and within designated time frames?). However, many organizations have developed qualitative as well as quantitative metrics, although the former are often more subjective than the latter. Many organizations have made great improvements to onboarding practices when scoring the efforts and comparing them over time to employee safety performance on the job.
Knowledge/skill-building activities can include supervisory safety coaching, but more often focus on training for general and job-specific safety. Safety training can be instructor-led, classroom-type training (both in-house and out-sourced), computer-based training (CBT), or on-the-job (OTJ) activities. Although many organizations still rely on the Kirkpatrick metrics for evaluating training (training evaluation, knowledge gain, transfer to the workplace, and sometimes ROI on training investment), more and more are actually testing for competence in doing the job safely. This is usually a job performance demonstration by the trainee and an evaluation of demonstrated ability by a certified professional in the specific job field. Organizations with goals of excellent safety performance often state that every employee is expected to become a safety expert at his or her job as well as a competent worker.
These measurements of activities designed to drive safety performance are often given weighted scores and combined into an overall score of safety drivers. Most of these are based on a 1-10 or 1-100 scale, with higher numbers reflecting better scores. Many organizations give ranges of performance a color code and develop a dashboard of each metric to scan overall performance. For example: 90-100 could be green, 80-89 could be yellow, and anything 79 and below could be red. A table of these metric titles and their corresponding colors provides a focus on problem areas at a glance, which can be followed up with improvement discussions and action plans.
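A weighted composite and color-coded dashboard of this kind is straightforward to compute. The sketch below assumes the five driver categories named earlier in the article, with illustrative scores and weights of my own choosing; the color bands (90-100 green, 80-89 yellow, otherwise red) follow the example in the text.

```python
# metric name -> (score on a 0-100 scale, weight); values are illustrative.
DRIVERS = {
    "leadership":          (92, 0.25),
    "supervision":         (85, 0.25),
    "conditional control": (78, 0.20),
    "onboarding":          (88, 0.15),
    "knowledge/skill":     (95, 0.15),
}

def color(score: float) -> str:
    """Map a score to the example dashboard color bands."""
    if score >= 90:
        return "green"
    if score >= 80:
        return "yellow"
    return "red"

def composite(drivers: dict) -> float:
    """Weighted average of the driver scores (weights need not sum to 1)."""
    total_weight = sum(w for _, w in drivers.values())
    return sum(s * w for s, w in drivers.values()) / total_weight

# The at-a-glance table the article describes: one color per driver.
dashboard = {name: color(score) for name, (score, _) in DRIVERS.items()}
overall = composite(DRIVERS)
```

With these example numbers the dashboard immediately flags "conditional control" as red, the kind of problem area the article suggests following up with improvement discussions and action plans.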
It is important to remember that this "safety driver" score is neither the ultimate nor a stand-alone leading indicator of safety. It is simply a metric that tells an organization whether it is working its plans to drive safety performance. If the plan is being worked, we need to know if the plan is working, i.e., having the desired results. The answer to that question involves two other sets of leading indicators and their correlation to the lagging indicators. If we drive safety performance, do we significantly change individual and organizational competency? Does that competency, in our controlled conditional environment, produce more excellent performance? And does that performance produce superior lagging indicators? This four-phased approach to a balanced scorecard for safety has proven to outperform the simplistic linear thinking that a few leading indicators drive the lagging indicators.