The past principles and practices of safety management have served us well and will continue to do so, particularly for simple tasks where individuals or small groups carry out work whose risks are well known and whose environment changes little. However, what happens when a procedure doesn’t fit the situation? What happens if something unexpected strikes and there’s no rule to cover it? What happens if we choose to go outside the procedure to get things done?
Well, the old approach would probably class this as a violation, or at best an error, apply punishment to prevent recurrence, and probably write another procedure to cover that particular gap. The problem is that procedure writing quickly runs out of steam. We hear stories of organisations with volumes of procedures that operators had literally no idea even existed, let alone applied and used on a daily basis.
In the construction industry, trades typically sit under hundreds of different rules, SWMSs, procedures and permits, yet in practice actively use very few. Why, then, are accident rates not much higher than would be predicted? If people routinely violate and deviate, shouldn’t we be seeing far worse safety performance?
We don’t see this because people are adaptable: people can change, and people revise what they are doing. Unlike most machines, people can improvise amazingly well under conditions of risk and uncertainty. So rather than suppressing that remarkable capability, why don’t we learn from it? Why don’t we embrace it, and importantly, why don’t we enhance it?
Looking at the present and into the future, we can quickly appreciate that human ingenuity, expertise and flexibility are only going to become more important. The pace of technological innovation outstrips our capacity to write more procedures and train people thoroughly.
Thinking with common sense and intuitiveness
Work systems are becoming so complex that it’s impossible to proceduralise them thoroughly, or even to understand them completely. In fact, procedures start to contradict each other. In other words, safety systems are not free of problems, and we have to learn how to overcome their flaws.
As human beings, people know when demands are high and can adjust their performance accordingly. When procedures are needed, people can use their judgement to figure out the best way to apply them, rather than treating them as a comprehensive script that just dumbs things down.
People know when things are about to go wrong and, if supported, can either prevent them from getting worse or recover operations quite quickly. This is the next evolution in safety management. Instead of focusing on work as imagined, by developing procedures, SWMSs, rules and prescriptions en masse, we focus on work as done: what actually happens.
By taking the time to understand and appreciate how things are performed, and what causes frustration and hence increases risk, we come at safety from the opposite, or complementary, angle. This approach treats safety as something we have to establish, build and create. Dr Todd Conklin says, “Safety is the presence of positive capacities, not the absence of negative events”. In other words, we want organisations to develop the capacity to fail safely. Humans are prone to error, so let’s accept that and make sure we have controls in place that allow failures to occur safely.
Safety being redefined
Safety is now being defined as ensuring that things go right as often as possible. This suggests that things don’t simply go wrong or right because of failures or non-compliances; normal variability and adjustment cause both success and failure.
For example, every day a mobile plant operator ignores an alarm because his colleagues said it’s faulty. That’s routine variability, or going outside the procedures; we might also call it practical drift, another technical term. One day, however, someone else goes outside their own procedure and opens the valve normally monitored by that alarm to complete a maintenance task. The alarm flashes again, this time for real, and because it has been dismissed as faulty, it is ignored and an incident follows. What caused the incident was everyday variability and adjusted performance, people going outside the procedures, not one single root-cause failure.
Seek to understand what happened, and improve
In this case, simply punishing the original operator for his non-compliance would do very little to improve operational safety. Better would be to investigate why the variability was there in the first place, and to work on improving things like induction and onboarding, as well as maintenance of the alarm system.
Lots of different actions are available to actually improve the safety of that operation. Better still would be if the operator had spoken up at the time and highlighted the variance from the procedure ahead of the incident. In organisations moving towards the new approach, this is more likely to occur, because people can talk openly about problems, failures and variability without the threat of blame or punishment looming over their heads.
This is the next step in safety improvement. Note that we don’t throw away the past approach, not at all; we simply scale it back and use it where things have already gone wrong, or where we identify that something could go wrong.
Outside our high-risk, well-known and thoroughly understood settings, the new approach becomes the dominant way to manage safety, because it concentrates on the ordinary, the everyday, the work that gets done successfully without anyone really noticing. The key is to appreciate, understand and learn from variability.