Copyright © Erik Hollnagel 2022
All Rights Reserved.
Erik Hollnagel
Professor, Ph.D.
Professor Emeritus
LiU (S), ENSMP (F), SDU (DK)
The chief motive of all human actions is the desire to avoid anxiety.
Ibn Hazm (994-1064)
A semi-chronological list of (good?) ideas that I have had the opportunity - or good luck - to work on and/or contribute to.
Cognitive Systems Engineering (CSE). CSE was the result of my first collaboration with David Woods, which has been going on for 40 years. CSE can now be seen as a flawed attempt at a universal paradigm for human-machine systems, but some parts of CSE are still highly relevant today.
Goals-Means Task Analysis (GMTA). In retrospect this is probably a kind of precursor to FRAM in the sense that it describes a principle for systematically identifying functions and their prerequisites.
Phenotypes and genotypes (of erroneous actions). This was an early (and futile) attempt to get rid of "human error" as a concept - and certainly as a cause. It resulted in a couple of papers in the early 1990s. More important, however, is an unpublished position paper for a NATO ASI on Human Error (1983). The book from the NATO ASI wrote this about the position paper. Much has happened since then, as nicely summarized here: https://doi.org/10.1080/00140139.2021.1953615
Sic transit ...
CREAM (Cognitive Reliability and Error Analysis Method) - in retrospect clearly a big mistake!
COCOM (Contextual Control Model)
ECOM (Extended Control Model)
Resilience Engineering (with David Woods and others)
The ETTO Principle
FRAM (the Functional Resonance Analysis Method)
The purpose of FRAM, as a method, is not to find the cause or causes of something that has happened, but instead to develop or build a model of how something happened, happens, or could happen. This model can then be used to understand why something happened.
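The idea of modelling how something happens rather than hunting for causes can be illustrated with a small sketch. In FRAM, a model consists of functions, each characterized by six aspects (Input, Output, Precondition, Resource, Time, Control), and couplings emerge where the Output of one function matches another function's aspect. The code below is a hypothetical illustration of that representation, not an official FRAM tool; the function names and example entities are invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a FRAM-style representation (not an official tool).
# A function is described by six aspects; couplings arise where the Output
# of one function appears as a non-Output aspect of another.

def _empty_aspects():
    return {
        "Input": set(), "Output": set(), "Precondition": set(),
        "Resource": set(), "Time": set(), "Control": set(),
    }

@dataclass
class Function:
    name: str
    aspects: dict = field(default_factory=_empty_aspects)

def couplings(functions):
    """List potential couplings: (source, entity, target, aspect) tuples
    where an Output of one function matches an aspect of another."""
    links = []
    for src in functions:
        for out in src.aspects["Output"]:
            for dst in functions:
                if dst is src:
                    continue
                for aspect, items in dst.aspects.items():
                    if aspect != "Output" and out in items:
                        links.append((src.name, out, dst.name, aspect))
    return links

# Example: two everyday functions coupled via an output.
brew = Function("Brew coffee")
brew.aspects["Output"].add("coffee ready")
drink = Function("Drink coffee")
drink.aspects["Precondition"].add("coffee ready")

print(couplings([brew, drink]))
# [('Brew coffee', 'coffee ready', 'Drink coffee', 'Precondition')]
```

The point of such a model is descriptive: it shows how functions depend on one another, which can then be used to reason about how outcomes - wanted or unwanted - emerge from everyday variability.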
Systemic Potentials Management (SPM) - formerly called the Resilience Assessment Grid (RAG)
Resilient Health Care Net - which has now become the Resilient Health Care Society
Safety-I and Safety-II
A number of people were unable to resist the temptation, after Safety-I and Safety-II, to conclude that there must also be a Safety-III. This is, however, sheer folly, as argued in this little note.
The safety legacy comprises the widely and uncritically accepted assumptions about how something happens and how events that lead to unacceptable outcomes develop. The safety legacy is, in short, the set of tacit assumptions that determine how we perceive and interpret the occurrence of unacceptable outcomes, and therefore also determine how people respond to them -- what they do about them. The safety legacy is derived from the Heinrich dogma.

The common and nearly instinctive response to an unacceptable outcome, meaning something that is harmful, costly or in other ways affects a person's life and activities in an unwanted way, is not surprisingly to take steps to prevent it, to limit it when it happens, and to try to keep it from happening again in the future (but the latter requires assumptions about how the future develops -- about what determines future developments and events). In this respect there are clear differences between the different stages of safety thinking. Since Safety-I is based on a gradual reduction of the number of unacceptable outcomes (such as incidents and accidents), it is basically a decremental approach, and the attitudes that conform to that may be called a decremental safety culture. The reduction has to be gradual or evolutionary, since an abrupt or revolutionary reduction might easily disrupt established habits and routines, and therefore be counterproductive.
The accepted belief that safety is best achieved by eliminating or preventing acts that go wrong or otherwise fail has prevented us from recognizing the alternative, which derives from the simple fact that an act cannot go well and fail at the same time (this is only possible in quantum physics, and we are fortunately far from quantum safety). An alternative way to achieve the state of safety is therefore to increase the number of acts that go well and lead to acceptable outcomes. This can be done by encouraging, facilitating and supporting work that goes well -- in other words an incremental approach to safety, and a corresponding incremental safety culture.
An incremental approach must, of course, be gradual, just like a decremental approach, and for the same reason. The ultimate result of a decremental safety culture is the utterly unattainable ideal of zero accidents, commonly known as the zero accident vision (ZAV). Although psychologically attractive, it is impossible to achieve in practice. The ultimate result of an incremental safety culture is the likewise unattainable ideal of everything going well, called Visio centum (= 100%), which, like the ZAV, is impossible to attain in practice because the future is uncertain.
Safety Synthesis
Synesis
Half-baked ideas are a kind of memory dump. These are things that I started to work on but then either suspended for lack of time or set aside because they no longer seemed relevant.