Human and Organizational Performance: Building Systems Around the Individual to Reduce an Error's Impact

This article is based on a presentation by Ben Ferguson, CSP, director-human and organizational performance, Cargill, Inc., Iowa City, IA, given July 13 at the CONVEY ‘21 conference in Omaha, NE. To view a recording of the presentation at no cost, go to

Human and organizational performance (HOP) is a science-based approach to looking at mistakes and incidents to see how they can be addressed more effectively. It creates an understanding of how humans perform and how we can build systems that are safer and more error tolerant.

Cargill began looking into HOP years ago because we were dissatisfied with our fatality and serious injury performance. When we looked back at our general safety metrics, we were meeting or exceeding industry averages, but our performance on fatalities and serious injuries was below average.

So our first step was evaluating peer companies to see what they were doing to reduce fatalities and serious injuries. Cargill determined that HOP was key for the companies we evaluated.

Systems Model

When we started learning about HOP, we used the systems model depicted on p. 29 to help us visualize and hone our focus. HOP is about how humans interact with the systems (e.g., people, programs, processes, work environment, organization, and equipment) around them to create success or failure. These systems are interrelated, so when there is variability in an area, it likely will affect other areas. For example, if I have an equipment malfunction, I know it will affect the processes that support that equipment. It also affects the people at the facility, because it is an unplanned problem that needs fixing.

The person in the center of the graphic is important, as well. They are the person with their hands on the controls, so to speak. If they show up and have a bad day at work, the systems around them can suffer. Likewise, they can be influenced by the systems. The unique capability that the person provides is adaptive capacity, which allows them to adjust to small and large changes in the systems. This skill is how good work gets done every day, but once a person exceeds their adaptive capacity, safety risks can arise.

Human Error

Human error is an action or inaction that meets one or more of the following criteria:

• It unintentionally results in an undesirable or unwanted condition.

• It takes a task outside of its limits.

• It deviates from a rule, standard, or expectation.

According to incident report data, 80% of workplace injuries result from human error, and 20% are caused by machine and equipment failure.

The causes of human error were found to be 70% attributable to organizational weaknesses and 30% to individual mistakes.

Humans are error-making machines, but they also are experts at identifying that an error has occurred and correcting the issue before something bad happens.

Challenges. One of the difficulties in regard to human error is that an error is easy to detect in retrospect but difficult to detect in context. After an event has occurred, it is easy to find the actions that contributed to the outcome. In the context where these situations occur, they seem like normal reactions to the current situation. People are fallible but optimistic.

Many people also have a bias that causes them to equate error to moral failing. This way of thinking limits human learning and can erode trust.

Building around error. Systems should be built with the understanding that human error is inevitable. Systems should be able to tolerate and recover from errors. However, there may be errors that could lead to life-altering or life-ending events. We need to design systems that make it easy to do the right thing and harder to do the wrong thing, since humans like to take the path of least resistance.

We must value near misses as significant learning opportunities. After a near miss, look for what went well and what did not go well in the systems' response. This will tell you whether your systems are sound or you simply got lucky this time.

Serious Injuries and Fatalities

Cargill focuses heavily on serious injury and fatality (SIF) prevention. We have identified 12 LIFEsaver areas that have an increased potential for an SIF exposure or SIF event:

• Bulk material handling.

• Confined space.

• Electrical work.

• Excavation work.

• Hazardous materials.

• Hot work.

• Lifting and rigging.

• Lockout/tagout.

• Mobile powered equipment.

• Motor vehicle traffic safety.

• Railcar safety.

• Working at height.

Applying HOP thinking and methods helps to identify systemic drivers and weaknesses, as well as single-point vulnerabilities, where the failure of one element causes the entire system to fail. HOP thinking also helps to verify the presence and capacity of the controls that keep workers safe. This shows how well your company supports its employees and contractors performing high-risk work.

Final Thoughts

People base decisions on their understanding of the context around them and the objectives to be achieved. If we cannot make sense of someone’s decision, we likely do not understand the context influencing them. We cannot manage what we do not understand.

“People do what they do, at the time they do it, for reasons that make sense to them at that time.” -Sidney Dekker

Tucker Scharfenberg, managing editor

From the September/October 2021 GRAIN JOURNAL