Saturday, November 6, 2010


Was reading up on high reliability organizations (HROs)...

These are "... organizations with systems in place that are exceptionally consistent in accomplishing their goals and avoiding potentially catastrophic errors. The industries first to embrace HRO concepts were those in which past failures had led to catastrophic consequences: airplane crashes, nuclear reactor meltdowns, and other such disasters. These industries found it essential to identify weak danger signals and to respond to these signals strongly so that system functioning could be maintained and disasters could be avoided..." (source).

"... key characteristics of HROs. These include organizational factors (i.e., rewards and systems that recognize costs of failures and benefits of reliability), managerial factors (i.e., communicate the big picture), and adaptive factors (i.e., become a learning organization) (Grabrowski & Roberts, 2000). More specifically, HROs actively seek to know what they don't know, design systems to make available all knowledge that relates to a problem to everyone in the organization, learn in a quick and efficient manner, aggressively avoid organizational hubris, train organizational staff to recognize and respond to system abnormalities, empower staff to act, and design redundant systems to catch problems early (Roberts and Bea, 2001). In other words, an HRO expects its organization and its sub-systems will fail and works very hard to avoid failure while preparing for the inevitable so that they can minimize the impact of failure..." (source)

Research has identified five main characteristics that HROs use to ensure "mindfulness" and to avoid failures (source):
  • A preoccupation with failure
  • Reluctance to simplify interpretations
  • Sensitivity to operations
  • Commitment to resilience
  • Deference to expertise
So, such organizations focus on their failures and use them to improve their systems:
  • 'Near misses' are treated as opportunities to change and improve the systems, not as evidence that the systems were sufficiently robust (even though they may have caught the problem and prevented a catastrophic failure).
  • They avoid oversimplified explanations of how things work, since oversimplification risks failing to understand all the ways in which a system might conceivably fail.
  • They recognize the complexity of the working environment and practice "situational awareness" so that anomalies, failures, and problems are identified immediately, before they can result in serious consequences.
  • They assume that unanticipated breakdowns will occur, and proactively prepare for them by training staff in effective teamwork and by rehearsing responses to possible system failures.
  • Finally, they believe that when failures occur, the team should defer to the person with the greatest expertise in the relevant area, as opposed to responding in a hierarchical manner...

All good. However, this blogger has a concern about the term 'deference.' The stated characteristic is "deference to expertise."... True, this does say that the team should take its lead from the person with the most knowledge of the issue at hand, not from the person at the top of the hierarchy. It also implies that all staff feel free to speak up and share information.

Deference: respectful submission or yielding to the judgment, opinion, will, etc., of another (per ...). Synonyms: acquiescence, capitulation, complaisance, condescension, docility, obeisance, submission, yielding...

However, there can often be a correspondence between one's expertise and one's position in an organization's hierarchy: oft times it is one's knowledge, expertise, and success that get one promoted, even if the link between position and topical expertise can then become greatly attenuated over time. And in common parlance, deference is most often perceived as something that occurs in relation to some hierarchy, be it organizational or societal... Given this, this blogger would be more comfortable with an alternate term, one that would lessen the risk of muddying the waters and detracting from a full understanding of the very important underlying principle!

Some links:
Wikipedia: high reliability organization
The San Bernadino Group: High Reliability Organizing
5 Habits of Highly Reliable Organizations
Must accidents happen? Lessons from high-reliability organizations
Transforming Hospitals Into High Reliability Organizations
The Better the Team, The Safer the World: Golden rules of group interaction in high risk environments

Of course, this may just be an example of this blogger cavilling ("to raise irritating and trivial objections; find fault with unnecessarily...").
