The risk of best practices (Or the balance that must exist between Outcome Accountability and Process Accountability)

By: Jonas Spellman

In performance-based business cultures, executives often adhere to best practices. The risk is that once we declare a routine the best one, it becomes frozen in time. We preach its virtues and stop questioning its defects; we are no longer curious about where it is imperfect or how it could improve. Organizational learning should be a continuous activity, but the very term "best practice" implies that performance has reached an end point.

NASA is an iconic example of the urgent need for impeccable performance: an error there means, literally, the certain death of the crew of the space capsule. Before the Columbia tragedy in 2003, teams routinely met after training simulations and significant operational events, yet what stood in the way of questioning and rethinking the way things were done was a performance culture that held people accountable for results (i.e., outcome accountability). Every time NASA delayed a scheduled launch, it faced harsh public criticism and threats of funding cuts. Every time it celebrated a flight into orbit, it encouraged its engineers to focus on the fact that the launch was a success and to set aside the analysis of flawed processes that could jeopardize future launches. That left NASA rewarding luck and repeating hidden bad practices, unable to rethink what it called acceptable risks. And none of this was for lack of skill; after all, we are talking about rocket scientists (this is, literally, rocket science). But as Ellen Ochoa, an astronaut and fellow of the American Institute of Aeronautics and Astronautics, observes: “When it comes to people’s lives, it’s natural to trust the procedures you already have, the ones that have already been shown to work. We had a hard time understanding that this may be the best approach for a critical situation, but that it becomes a high risk if it prevents a thorough post-hoc evaluation.”

Focusing on results can be good for performance in the short term, but it can be a giant obstacle to learning in the long term. Social scientists have found that when people are held accountable only for the outcome of an event, they are more likely to escalate their commitment to failing courses of action. The end, in this logic, justifies the means. In other words, praising and rewarding results alone is dangerous because it breeds overconfidence in bad practices, which incentivizes people to keep doing things as they always have. It is not until a high-stakes decision goes terribly wrong that people pause to reexamine their practices. But we shouldn't have to wait until a space shuttle explodes and seven people die to determine whether the course of action was the right one.

Alongside outcome accountability, we can build process accountability by evaluating how carefully different options are considered as people make decisions. A bad process rests on superficial thinking: not all the variables are considered, and the assumptions at the heart of the process go unquestioned. A good process rests on deep thinking and rethinking, allowing people to form and express independent opinions. Research on this shows that when we have to explain the procedures behind our decisions in real time, we think more critically and process the possibilities more thoroughly. We begin to ask ourselves two uncomfortable but essential questions: How does this work? And what are its current weak points?

Embedding process accountability in an organization may sound like the opposite of providing psychological safety: one might think that it increases people's fear of being evaluated or observed along the way, which in turn would deter them from experimenting, making mistakes, and learning. But research shows this is not the case. Harvard researcher Amy Edmondson explains that when there is psychological safety without accountability, people tend to stay within their comfort zone; conversely, when there is accountability without safety, people tend to remain silent in a zone of anxiety. When we combine the two, we create a learning zone. People feel free to experiment, and to find flaws in their own experiments and those of their peers in order to improve them. They become a network of growth and challenge.

Sources and reviewed material:

  • Think Again (Adam Grant), 2021 edition.
  • C. Sanchez and D. Dunning, “Overconfidence Among Beginners: Is a Little Learning a Dangerous Thing?” Journal of Personality and Social Psychology 114, no. 1 (2018): 10–28.
  • Internal material of SYNERGOS