The chief motive of all human actions is the desire to avoid anxiety.
Ibn Hazm (994-1064)
The NO view of 'human error'
'Human error' has been the focus of much debate and many arguments for nearly 50 years. The term came to the fore during the human factors surge that followed the accident at Three Mile Island in 1979. In the rush to use 'human error' to explain accidents and incidents, it was generally overlooked that the term can be used in at least three different ways: to denote the cause of something, to denote the event or action itself, or to denote the outcome of an action.
Even though the common underlying sense is that a 'human error' refers to an incorrectly performed human action as the cause of an unwanted outcome, a term with several different meanings is clearly not very practical. It is an obstacle to effective and precise communication; it is a problem for measurements and statistics, since it leaves uncertain what is actually being counted; and it is a hindrance to learning, since the preoccupation with finding a 'human error' means that the search for alternative – and potentially more effective – explanations is abandoned.
The 'old view' of 'human error' was based on the convenient but false assumption that humans can be described and understood as machines – specifically as information processing machines or systems. This made it possible to talk about 'human error mechanisms' and to model them. (It also made it legitimate to try to estimate the probability of a 'human error', as done by Human Reliability Assessment or HRA.) While adopting terms such as 'cognitive error' and referring to cognitive processing and cognitive reliability (mea culpa!) softened the analogy a bit, it did not overcome the basic limitations of this kind of thinking.
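To see what treating the human as a machine component amounts to in practice, consider the style of calculation used by HRA methods such as SPAR-H, which scale a nominal 'human error probability' (HEP) by context-dependent multipliers. The sketch below is illustrative only: the function name and all numerical values are invented for the example, not taken from any method's actual tables.

```python
from math import prod

def hra_style_hep(nominal_hep: float, psf_multipliers: dict[str, float]) -> float:
    """Illustrative HRA-style calculation: a nominal 'human error
    probability' is scaled by performance shaping factor (PSF)
    multipliers, exactly as if the operator were a component with a
    failure rate. All values are hypothetical."""
    hep = nominal_hep * prod(psf_multipliers.values())
    return min(hep, 1.0)  # a probability cannot exceed 1

# Hypothetical assessment: a routine action under time pressure,
# poor ergonomics, and stress. The multipliers are invented.
hep = hra_style_hep(
    nominal_hep=0.001,
    psf_multipliers={"available_time": 10.0, "ergonomics": 5.0, "stress": 2.0},
)
print(f"Estimated HEP: {hep:.3f}")  # -> Estimated HEP: 0.100
```

It is precisely this component-style arithmetic that presupposes a well-defined 'failure' of a human 'mechanism' – the presupposition that the 'no view' below rejects.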
The 'old view' also puts the blame on the individuals ('operators') who happen to be present at the wrong time and in the wrong place.
The 'new view' softened this position a bit by recognising that incorrect human actions are mostly a symptom of deeper trouble within a system or workplace. Incorrect human actions at the sharp end could be due to 'error forcing' conditions, often the result of uninformed decisions at the blunt end. The 'new view', however, maintained 'human error' as a meaningful category and therefore sustained the belief that there are clearly distinguishable categories of causes as well as of consequences. Even when authors such as Jens Rasmussen and James Reason emphasised the utility of making 'errors', they retained a distinction between 'normal' performance and 'errors'.
The only reasonable alternative to 'human error', whether in the 'old view' or the 'new view', is a 'no view'. The 'no view' simply says that 'human error' is not a meaningful category and that we should therefore stop using it. The argument is that all human activity, individually and collectively, is variable in the sense that it is adjusted to the conditions. The variability is therefore a strength, indeed a necessity, rather than a liability. Resilience engineering has made this clear by emphasising that failures are the flip side of successes. Possibly the first detailed presentation of this position is the 'Position Paper on Human Error' from 1983.
Once we acknowledge that performance always varies and is never flawless, the need for a separate category of 'human error' evaporates. It is, indeed, a Rylean category mistake, since it ascribes to the mind a property that is meaningful only for technological systems, namely the property of an incorrect process. We still need, of course, to account for the variability of human performance; one example of how to do that is provided by The ETTO Principle. Since the variability is furthermore not random, we can use the regularities (and irregularities) of human and organisational performance to understand both why things go well and why they go wrong.
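To make the 'flip side of successes' argument concrete, here is a deliberately toy simulation (all names and numbers are invented for illustration). One and the same performance process – adjusted to conditions, and ordinarily variable – produces both the many successes and the few failures; no separate 'error mechanism' is invoked anywhere.

```python
import random

random.seed(1)

def perform_task(time_pressure: float) -> bool:
    """Toy model of performance variability: the worker trades
    thoroughness for efficiency under time pressure (an ETTO-style
    adjustment), and ordinary variability does the rest. The numbers
    are invented and carry no empirical weight."""
    thoroughness = max(0.1, 1.0 - time_pressure)       # adjust to the conditions
    performance = thoroughness + random.gauss(0, 0.1)  # everyday variability
    return performance > 0.3                           # outcome: success or failure

outcomes = [perform_task(time_pressure=0.5) for _ in range(10_000)]
print(f"successes: {sum(outcomes)}, failures: {outcomes.count(False)}")
```

In this sketch roughly 2% of the runs end in failure, yet nothing different happened in those runs: labelling them 'human errors' would obscure the fact that they were produced by exactly the same adjustments that produced the successes.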