Human Reliability Analysis (1993)

Hollnagel, E. (1993) Human reliability analysis: Context and control. London: Academic Press.

Japanese translation: 認知システム工学―情況が制御を決定する [Cognitive systems engineering: Context determines control] (1996). Tokyo, Japan: Kaibundo Publishing Co. Ltd.

Prolegomenon


1. Reader's Guide.

The purpose of this introduction is to provide the reader with a survey of the topics that are treated in the book, as well as some supplementary information about the book itself. The purpose of these first paragraphs is to provide the reader with a guide to the introduction itself.
The first section presents the purpose of the book as well as the rationale for writing it. It also provides some advice about who should read the book and who should not. This section should therefore be read by all, even the casual browser in the bookstore.
The second section briefly goes through the book in a chapter-by-chapter fashion. Readers who have not been put completely off by the first section are encouraged to read the second section. It will enable them to decide which chapters of the book they should concentrate on and in which order.
The third and last section provides miscellaneous information and comments. Readers whose curiosity is aroused by the headings should read the associated text at some time, although not necessarily before starting on the main chapters of the book.
2. Rationale.

At the beginning of the 1990s the field of human reliability analysis (HRA) was in a state of pronounced dissatisfaction with the available methods, theories, and models, but as yet there were no clear alternatives (Dougherty, 1990). The intention of this book is to present such an alternative, based on the principles of cognitive systems engineering.
Throughout the 1980s there was a growing recognition in the engineering world of the role of human cognition in shaping human action -- both when it led to accidents and when it prevented them. This recognition was not felt in human reliability analysis alone, but also in the concern with man-machine systems in general, with decision support systems, with human-computer interaction etc. One consequence was that "cognitive" and "cognition" became fashionable terms for almost all aspects of man-machine interaction. As an example, the book about "Accident Sequence Modelling" by Apostolakis et al. (1988) has the following main entries:

(1) cognitive activity,
(2) cognitive competencies,
(3) cognitive environment simulation,
(4) cognitive modelling,
(5) cognitive primitives,
(6) cognitive processing,
(7) cognitive reliability analysis technique,
(8) cognitive structures,
(9) cognitive sub-elements, and
(10) cognitive under-specification.

In many cases, however, the allusion to cognition was a matter of convenience rather than a real change in orientation. Cognition is nevertheless of fundamental importance, and it is consequently necessary to have adequate methods, theories, and models to address properly the role of cognition in human action -- and particularly specific issues such as the reliability of cognition.
The study of human cognition developed out of experimental psychology in the 1960s and has gradually branched into several distinct directions (it would probably be going too far to call them scientific disciplines). Some of these focus on basic research issues while others venture into what for academia is the terra incognita of applications; among the latter are cognitive science, cognitive systems engineering, and cognitive ergonomics.
Cognitive systems engineering (Hollnagel & Woods, 1983) is based on the principle that human behaviour -- in work contexts and otherwise -- should be described in terms of joint or interacting cognitive systems.[1] A joint system where one of the parts is a cognitive system is also in itself a cognitive system. Hence all man-machine systems are by definition cognitive systems. In the classical view of man-machine systems, one could consider the man (= the operator) by himself, the machine (= the process) by itself, and add the interaction between the two. This view, however, misses the notion of integration and dependency and -- in particular -- the fact that all activities take place in a context.
Cognitive systems engineering is obviously not the only way to look at human cognition and it cannot be proved that it is the correct way. It is, however, a usable basis for describing human cognition in the context of human work, i.e., it is pragmatically correct. The specific developments described in this book are focussed on the notion of how actions are controlled and on how control and reliability are related.
2.1 Credo:

Better analyses of the reliability of cognition are needed for practical reasons alone. Current approaches to HRA are based on the principle of describing situations in terms of appropriate components or elementary events, e.g. as single actions. This principle of decomposition is basically a consequence of the underlying view of the human operator as a machine -- possibly a complex, cognitive machine, but a machine nevertheless.
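To make the decomposition principle concrete, the sketch below (not taken from any particular HRA method; the action names and probability values are purely hypothetical) shows how such an approach treats a task: the task is split into single actions, each action is assigned a point human error probability, and the estimates are combined as if the actions were independent components.

# Illustrative sketch only; action names and probabilities are hypothetical.
# A task is decomposed into single actions, each with a point human error
# probability (HEP), and the estimates are combined assuming independence.

task = [
    ("read the alarm message", 0.003),
    ("diagnose the disturbance", 0.010),
    ("select the relevant procedure", 0.005),
    ("execute the manual action", 0.002),
]

p_success = 1.0
for action, hep in task:
    p_success *= (1.0 - hep)  # each action treated as an independent component

p_failure = 1.0 - p_success
print(f"Estimated probability of task failure: {p_failure:.4f}")

It is precisely this treatment of actions as independent components with fixed failure probabilities that the following paragraphs argue against.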
Such approaches are, however, inadequate as a way of describing human cognition because they are not based on a clear theory of human cognition -- or even on a clearly formulated description of what human cognition is. A proper analysis or assessment of human reliability must not only acknowledge the role of cognition, but also include a theory or description of human cognition and of the reliability of cognition.
Any such model -- even a very simple model of cognition -- will show that cognition must be considered as a whole and as an integrated activity that reveals itself in a context, rather than as a decomposable ordering of elementary functions and bits of knowledge. Any assessment method must start by recognizing this fact and strive to derive a description which does not conflict with that.
An alternative approach to human reliability analysis may make it less straightforward -- but also less necessary -- to provide point estimates or point probabilities of individual actions. It will, however, improve the qualitative basis for developing solutions that consider the system as a whole and which therefore contribute to the overall goal of reducing the number of unwanted consequences. An alternative approach will also make it easier to assess the overall risk or reliability of a work situation in a meaningful way.
On the other hand it will also reduce the need to collect data (estimates) for minute aspects of human performance, since such data will no longer be very important. Instead, data must be sought at the level of cognitive ensembles, i.e., the practically meaningful segments of work.
2.2 The Root Cause:

Risk and reliability analyses are often made on the basis of descriptions that use trees as an underlying structure: operator action trees, event trees, cause-consequence trees, etc. Since every tree has one root -- at least in the simplified graphical representations that commonly are used -- the notion of a root cause has become widespread. The root cause, of course, means the single, identifiable cause for an observed consequence, even though most practical cases show that there rarely is only one cause.
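The point can be illustrated with a small sketch (the node names are purely hypothetical): because every node in a tree has exactly one parent, tracing backwards from an observed consequence always terminates at exactly one node, and it is this node that the representation invites us to call the root cause, however many contributing conditions there were in reality.

# Illustrative sketch only; node names are hypothetical. Each node in a tree
# has exactly one parent, so tracing back from a consequence always ends at
# a single node -- the "root cause" -- whatever the real situation was.

parent = {
    "plant trip": "valve left closed",
    "valve left closed": "procedure step omitted",
    "procedure step omitted": "ambiguous procedure wording",
    "ambiguous procedure wording": None,
}

def trace_root(consequence):
    """Follow the single chain of parents until no further cause is recorded."""
    node = consequence
    while parent.get(node) is not None:
        node = parent[node]
    return node

print(trace_root("plant trip"))  # -> ambiguous procedure wording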
In the case of this book the root cause was a special issue of the journal Reliability Engineering and System Safety that dealt with the problems of HRA and the unhappy state of the art. The basis for the special issue was a position paper by Ed Dougherty (1990), which was followed by a number of comments (some short, some long, some agreeing and some disagreeing) from people who, in one way or another, either had experienced the problem or had an opinion on it.
I am sure that there are even more opinions than were expressed in the special issue. In fact, I was asked to contribute a comment and started to write down my views but did not finish them in time for the special issue. As luck would have it, another opportunity came at the International Conference on Probabilistic Safety Assessment and Management (PSAM), which was held in Beverly Hills, February 4-7, 1991. For this occasion I elaborated on my unfinished comments and presented them as a paper entitled "What Is a Man That He Can Be Expressed by a Number?" That paper in turn became the starting point for this book, which can be seen as an elaboration and extension of the main theme of that paper, i.e., a long argument against viewing and describing humans in terms of numbers -- whether as reliability measures or something else.
Although the special issue of Reliability Engineering and System Safety mentioned above can be seen as a root cause for this book, it is certainly not the only cause. The paper by Dougherty (1990) merely expressed the concerns that many HRA practitioners had. In addition, psychologists and others had generally criticised the approach to quantitative modelling that HRA practitioners had taken. In his editorial Apostolakis (1990) rather bluntly expressed it thus: "... researchers who try to understand human behavior and to develop models for the operators have a very negative view toward the use of such quantitative models, whose foundations they consider to be unacceptable." This critical view can be found in practically all of the books and papers published during the 1980s that looked at "human error" from the behavioural or social sciences point of view (e.g. Perrow, 1984; Rasmussen et al., 1987; Reason, 1990; and Senders & Moray, 1991). It is a criticism which is amplified by the general view of cognitive systems engineering and cognitive ergonomics, as described above. The real "root cause" for this book is therefore an assortment of views and issues that gradually were developed during the 1980s by the international community of people concerned with the study of human cognition.
2.3 Who Should Read This Book:

I have written this book with a certain audience in mind. The audience is not defined in terms of lines of profession but rather in terms of specific interests or views on man-machine systems and human performance. In other words, there is a certain audience that I hope will find the book congenial. This audience includes:

  • The HRA practitioners who have found the current approaches, models, and methods lacking in one way or another.
  • The scientists and researchers who adhere to what can generally be called the cognitive viewpoint, i.e., who find that human cognition plays an essential role in analysing and understanding human performance.
  • The specialists and engineers who are practically involved with the design, management, or use of man-machine systems in all fields and who are uneasy about the impact of human performance (the human factor) on system performance.
  • Those people who have an interest in the practical study of human behaviour and human cognition, and who are genuinely interested in or concerned about human performance in working situations.

2.4 ... And Who Should Not!

Just as there is an intended audience, there are also several groups of people who I expect will find this book rather disagreeable, and who therefore are advised not to read it unless they want to see their views challenged. These people include:

  • The practitioners and risk analysts who perform human reliability analysis and who are perfectly happy with the current approaches.
  • The scientists and researchers who firmly believe that the study of human cognition can only be carried out with well-controlled experiments and rigorous quantitative/statistical methods. This also includes those who believe that computational models or information processing descriptions can provide perfectly adequate explanations for human performance.
  • The specialists and engineers who cannot understand why some people have misgivings about quantifying probabilities for human errors and why these people therefore are reluctant to provide such numbers.
  • Those people who think that "human error" is a perfectly good root cause, and that the solution to the problem of "human error" basically is to increase the level of automation.

Any readers who feel that they do not belong to either of these groups, for instance because they are not interested in this field at all, should probably decide for themselves whether they want to go on reading. I expect, however, that they will find this book rather boring.