Hollnagel, E., Mancini, G. & Woods, D. D. (Eds.) (1986) Intelligent decision support in process environments. Berlin: Springer Verlag.
The increasing complexity of technological systems has shifted the demands on human performance from a mechanical/physical level to a cognitive level. The role of the human in complex systems is to act as a supervisor of automated or semi-automated resources, with goal setting, problem solving, and decision making as the primary tasks. While the negative consequences of possible malfunctioning have grown, the tolerances for performance variability have been reduced and the demands for accurate and safe performance have increased, leaving the human operator in a very difficult position. At the same time, advances in computer science (e.g. automated decision makers, object-oriented programming, and expert systems) have provided new possibilities to support human performance that challenge established man-machine system design principles.
G. Mancini, D. D. Woods & E. Hollnagel, (Eds.) (1988) Cognitive engineering in complex dynamic worlds. London: Harcourt Brace Jovanovich.
The contents of this book address the problem of how we should deploy the power available through developments in computational technology (e.g., object-oriented programming, expert systems, natural language interfaces) to assist human performance in complex problem solving worlds, i.e., cognitive engineering. Effective use of computational power depends on understanding the problems to be solved and how people solve and fail to solve these problems. This understanding will lead to principle-driven, rather than merely technology-driven, development of computational possibilities for intelligent and effective decision support, and can affect the very nature of the computational tools that are needed (e.g., techniques for reasoning about uncertainty; semantic reasoning engines).
Hollnagel, E. (Ed.) (1989) The reliability of expert systems. Chichester: Ellis Horwood Limited.
This book is the result of a one-day seminar on 'Safety and Risks in the Use of Expert Systems' which was held in Copenhagen on May 19th, 1988. The idea to have a seminar about this topic evolved from the discussions in the 'Sub-committee for Computer Systems' working under the auspices of the 'Committee on Risk Assessment'. The 'Committee on Risk Assessment' was appointed by a resolution in the Council of the Danish Academy of Technical Sciences in February 1985 and started its work in June the same year - so far for a period of three plus one years - with the following terms of reference:
Hollnagel, E. (1993) Human reliability analysis: Context and control. London: Academic Press.
Japanese translation 認知システム工学―情況が制御を決定する (1996). Tokyo, Japan: Kaibundo Publishing Co. Ltd.
1. Reader's Guide. The purpose of this introduction is to provide the reader with a survey of the topics that are treated in the book, as well as some supplementary information about the book itself. The purpose of these first paragraphs is to provide the reader with a guide to the introduction itself.
The first section presents the purpose of the book as well as the rationale for writing it. It also provides some advice about who should read the book and who should not. This section should therefore be read by all, even the casual browser in the bookstore.
The second section briefly goes through the book in a chapter-by-chapter fashion. Readers who have not been
Hoc, J.-M., Cacciabue, P. C. & Hollnagel, E. (Eds.) (1995). Expertise and technology: Cognition and human-computer cooperation. Hillsdale, NJ: Lawrence Erlbaum Associates.
Technological developments have changed the nature of industrial production so that today it is no longer a question of a human working with a machine, but rather of a joint human-machine system performing the task. This development, which started in the 1940s, has become even more pronounced with the proliferation of computers and the invasion of digital technology into all walks of working life. The first area where radical changes took place was administration and clerical work - including such trades as typesetting and technical drawing. We are now seeing a similarly drastic change in the industrial process domain. Process automation has long been used to improve the efficiency and safety of industrial production, but the availability of cheap but powerful computers is at this time producing fundamental changes
Hollnagel, E. (1998) Cognitive Reliability and Error Analysis Method (CREAM). Oxford: Elsevier Science Ltd.
No man is an island entire of itself;
John Donne (c. 1572-1631)
A Fable. Once upon a time a bunch of well-meaning scientists started to get worried about the many errors that other people made. This was not just the slip of the tongue and the everyday household errors, nor even the many people who killed each other on the roads. Such things had become commonplace and nobody paid much attention to them. In particular, the newspapers rarely reported them on the front page. The scientists were rather concerned about errors in complex industrial systems where a single person could cause irreparable harm to many others. These were the kind of events that would grab the headlines and dominate the news - at least for a day or two.
Dekker, S. W. A. & Hollnagel, E. (Eds.), (1999). Coping with computers in the cockpit. Aldershot, UK: Ashgate.
Computers in the Cockpit: Practical Problems Cloaked as Progress
Another book on aviation automation? Well, perhaps this is not a book on aviation automation per se. It is a book, rather, on how an entire industry is coping with aviation automation. Or, more precisely, on how it is coping with the human consequences of the automation it has fielded over the last two decades. The aviation domain, and the cockpit in particular, has always been seen as being at the forefront of technological and human-machine interface developments. From one angle, progress in the cockpit has been enormous, compared, say, with the technological state of the typical en-route air traffic control centre. But from another angle, such perceived progress has merely been the wrapping around a large
Hollnagel, E. (Ed.), (2003). Handbook of cognitive task design. Mahwah, NJ: Lawrence Erlbaum Associates.
What Is Cognitive Task Design?
In a Handbook of Cognitive Task Design (CTD) it is reasonable to begin by defining what CTD is. To do so it is useful first to make clear what it is not! CTD is not a new scientific discipline or academic field, nor is it a unique methodology.
CTD Is Not A New Scientific Or Academic Field
Since the mid-1970s, the terms cognitive and cognition have come to be used in so many different ways that they have nearly lost their meaning. There has in particular been an abundance of more or less formal proposals for lines of activity or directions of study that, as a common feature, have included the terms cognitive and cognition in one way or another. These range from cognitive ergonomics, cognitive systems engineering, and cognitive work analysis to cognitive tools, cognitive task
Hollnagel, E. (2004). Barriers and accident prevention. Aldershot, UK: Ashgate.
Japanese translation: ヒューマンファクターと事故防止 (2006). Tokyo, Japan: Kaibundo.
Spanish translation: Barreras y prevención de accidentes (2009). Madrid, Spain: Modus Laborandi.
Accidents have probably happened since the first caveman lit the first fire, but for many centuries the consequences of accidents were mostly limited to the people directly doing the work – what we now call the people at the sharp end. The introduction of technology to the work process changed all that. This development may with some justification be linked to the industrial revolution and even more precisely to the year 1769 when James Watt patented the steam engine. The industrial revolution introduced the large-scale use of machines as part of human work, beginning with mining and manufacturing, and thereby inevitably changed the very nature of work. Machines not only made production faster and more ...
Hollnagel, E. & Woods D. D. (2005). Joint cognitive systems: Foundations of cognitive systems engineering. Boca Raton, FL: CRC Press / Taylor & Francis.
The Driving Forces
The focus of Cognitive Systems Engineering (CSE) is how humans can cope with and master the complexity of processes and technological environments, initially in work contexts but increasingly also in every other aspect of daily life. The complexity of the current technological environment is not only something that must be mastered but paradoxically also provides the basis for the ability to do so. This entangling of goals and means is mirrored in the very concepts and theories by which we try to understand the situation of humans at work. To set the context, this chapter gives an overview of the scientific developments of the 20th century that have shaped our thinking about humans and machines.
Woods, D. D. & Hollnagel, E. (2006). Joint cognitive systems: Patterns in cognitive systems engineering. Boca Raton, FL: CRC Press / Taylor & Francis.
The gadget-minded people often have the illusion that a highly automatized world will make smaller claims on human ingenuity than does the present one ... This is palpably false.
Norbert Wiener, 1964, p. 63
For almost 25 years, Cognitive Systems Engineering (CSE) has searched out and listened to stories of claims on human ingenuity as fields of practice have changed and adapted to new pressures, new devices, and new opportunities. In these stories, change challenged how activities are coordinated, how systems are resilient at boundaries, and how artifacts provide affordances.
This meant we could complement the previous book on joint cognitive systems, which focused on the foundational concepts, with a book that used stories of cycles of complexity and coping to show the main values, concepts and approaches of CSE in action. This book provides a way to look at our collective progress. The general storylines
Hollnagel, E., Woods, D. D. & Leveson, N. C. (Eds.) (2006). Resilience engineering: Concepts and precepts. Aldershot, UK: Ashgate.
Spanish translation: Ingenieria de la resiliencia: Conceptos y preceptos (2013). Madrid, Spain: Modus Laborandi.
Japanese translation レジリエンスエンジニアリング - 概念と指針 (2013). Tokyo, Japan: Juse.
Hindsight and Safety
Efforts to improve the safety of systems have often – some might say always – been dominated by hindsight. This is so both in research and in practice, perhaps more surprising in the former than in the latter. The practical concern for safety is usually driven by events that have happened, either in one’s own company or in the industry as such. There is a natural motivation to prevent such events from
Hollnagel, E., Nemeth, C. P. & Dekker, S. W. A. (Eds.) (2008). Resilience Engineering Perspectives, Volume 1: Remaining Sensitive to the Possibility of Failure. Aldershot, UK: Ashgate.
Resilience Engineering in a Nutshell
Since the publication of Resilience Engineering: Concepts and Precepts (Hollnagel, Woods & Leveson, 2006), there has been considerable interest in what this approach to system safety really means and how resilience engineering can be brought into practice. At times the curiosity has been tempered by reasonable scepticism and doubts about whether resilience engineering really is something new. This is, of course, a welcome challenge, and one that fortunately can be answered by an unqualified Yes and No. It can be answered in the positive because resilience engineering does offer a different approach to system safety, as this book and the previous one demonstrate. And it can be answered in the negative because resilience engineering does not require that methods and techniques that have been developed across industries over several
Nemeth, C. P., Hollnagel, E. & Dekker, S. W. A. (Eds.) (2009). Resilience Engineering Perspectives, Volume 2: Preparation and Restoration. Farnham, UK: Ashgate.
Preparation and Restoration is the second volume of Resilience Engineering Perspectives within the Ashgate Studies in Resilience Engineering series. In four sections, it broadens participation in the field to include policy and organization studies, and articulates aspects of resilience beyond initial definitions:
Preparation and Restoration addresses issues such as the nature of resilience; the similarities and differences between resilience and traditional ideas of system performance; how systems cope with varying demands and sometimes succeed and sometimes fail; how an organization's ways of preparing before critical events can enable or impede restoration; the trade-offs that are needed for systems to operate and survive; instances of brittle or resilient systems; how work practices affect resilience; the relationship between resilience and safety; and what improves or erodes resilience.
Hollnagel, E. (2009). The ETTO Principle: Why things that go right sometimes go wrong. Farnham, UK: Ashgate.
What is arguably one of the most influential papers in contemporary psychology starts rather tantalisingly as follows:
My problem is that I have been persecuted by an integer. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals. ... The persistence with which this number plagues me is far more than a random accident. ... Either there really is something unusual about the number or else I am suffering from delusions of persecution.
(The paper in question is George Miller’s ‘The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information’ published in 1956. This paper introduced to the general public the notion of limitations in human short-term memory and attention, and proposed as quantification an integer that since then
Hollnagel, E. (Ed.) (2009). Safer Complex Industrial Environments. Boca Raton, FL: CRC Press / Taylor & Francis.
While a quick response can save you in a time of crisis, avoiding a crisis remains the best defense. When dealing with complex industrial systems, it has become increasingly obvious that preparedness requires a sophisticated understanding of human factors as they relate to the functional characteristics of socio-technical systems. Edited by industrial safety expert Erik Hollnagel and featuring commentary from leaders in the field, Safer Complex Industrial Environments: A Human Factors Approach examines the latest research on the contemporary human factors approach and methods currently in practice.
Drawing on examples mainly from the nuclear industry, the book presents a contemporary view on human factors in complex industrial systems. The contributors contrast the traditional view of human factors as a liability with the contemporary view that also recognizes the human factor as an asset without which the safe and efficient performance of complex industrial systems would be impossible. It describes how this view has developed in parallel to the increasing complexity and intractability of socio-technical systems, and partly as a consequence of that. The book also demonstrates how this duality of the human factor can be reconciled by recognizing that the human and organizational functions that can be the cause of adverse events are also the very foundation for safety.
Building on this, the book introduces theories and methods that can be used to describe human and collective performance in a complex socio-technical environment. It explores how contemporary human factors can be used to go beyond failure analysis to actively make complex industrial environments safer.
Hollnagel, E., Pariès, J., Woods, D. D. & Wreathall, J. (Eds.) (2011). Resilience Engineering Perspectives, Volume 3: Resilience Engineering in Practice. Farnham, UK: Ashgate.
Japanese translation: 実践レジリエンス エンジニアリング. Juse-P (2014).
The focus for safety efforts is usually, and traditionally, the unwanted outcomes, injuries and losses, that are the result of adverse events. This matches the common understanding of safety as ‘the freedom from unacceptable risk.’ Resilience engineering, however, defines safety as the ability to succeed under varying conditions. It is a consequence of this definition that it is equally important to study things that go right and things that go wrong. For resilience engineering, the understanding of the normal functioning of a socio-technical system is the necessary and sufficient basis for understanding how it fails. And it is both easier and more effective to increase safety by improving the number of things that go right, than by reducing the number of things that go wrong. The definition of resilience can be made more concrete by pointing to four abilities that are necessary for a system to be resilient. These are the ability to respond to events, to monitor ongoing developments, to anticipate future threats and opportunities, and to learn from past failures and successes alike. The engineering of resilience comprises the ways in which these four capabilities can be established and managed.
Sundström, G. A. & Hollnagel, E. (Eds.) (2011). Governance and Control in Financial Systems: A Resilience Engineering Perspective. Farnham, UK: Ashgate.
The Turmoil in the Financial Services System
At the time of finishing this book, i.e., December 2010, the impact of the 2007-2008 turmoil in the global Financial Services System (FSS) is still felt by the global economy and the Financial Services industry. In response to the crises, the industry continues to transform itself in front of our very eyes, trying to cope with something that has not yet been completely understood. National governments have intervened in various ways, referred to as rescue, bail-out or economic stimulus policies and programs, often reflecting political positioning as much as sound reasoning. Regulatory bodies such as the US Federal Reserve Bank, the European Central Bank, the Bank of Japan, the People’s Bank of China, and the Bank of England have taken unprecedented measures to stabilise the global Financial Services System, to the best of their understanding. New regulatory bodies have emerged, such as the European Systemic Risk Board and the US based ...
Hollnagel, E. (2012). FRAM - The Functional Resonance Analysis Method: Modelling Complex Socio-technical Systems. Farnham, UK: Ashgate.
Japanese translation: 社会技術システムの安全分析 : FRAMガイドブック. Tokyo, Japan: Kaibundo (2013).
Chinese translation: 功能共振分析方法--复杂社会技术系统建模. Beijing, China: National Defense Industry Press (2015).
Korean translation: 안전패러다임의 전환 II (안전시스템의 과거와 미래). Seoul, South Korea: SEJIN Publishing
Virtually all design is conducted in a state of relative ignorance of the full behaviour of the system being designed.
A State of (Relative) Ignorance
When a system is designed there is from the very beginning a need to know how it will function. Indeed, the very reason for system design is to construct an artefact that provides an intended functionality. In Henry Petroski’s book about ‘Design Paradigms,’ from which the above epigraph is taken, the topic was engineering design, and the majority of examples were physical structures and static systems, such as bridges. The ‘behaviour’ of a bridge is seemingly simple: it just has to be there and to maintain its own structure in order to allow safe passage for whoever or whatever uses it. Yet even in this case there is
Hollnagel, E., Braithwaite, J. & Wears, R. L. (Eds.) (2013). Resilient Health Care. Farnham, UK: Ashgate.
Japanese translation: レジリエント・ヘルスケア 複雑適応システムを制御する. Tokyo: e-hon. (2016)
Preface: On the Need for Resilience in Health Care
This book provides the first comprehensive description of resilient health care, henceforth referred to as RHC. Since there are probably only a few who, at the time of publication, have a clear idea about what this means, some words of introduction are appropriate. The simple explanation is that RHC is the application of the concepts and methods of resilience engineering to the field of health care, and in particular to the problems of patient safety. A more detailed explanation and exemplification are provided in the chapters that follow.
The motivation for RHC is threefold, as explained in this preface. The first is the sorry state of affairs in health care. The second is that attempts to improve this so far have had limited success. The third is the potential offer-
Hollnagel, E. (2014). Safety-I and Safety-II: The Past and Future of Safety Management. Farnham, UK: Ashgate.
Korean translation: 안전패러다임의 전환 I (안전시스템의 과거와 미래 I). Seoul, South Korea: SEJIN Publishing, 2015.
Japanese translation: Safety-I ＆ Safety-II ―安全マネジメントの過去と未来. Kaibundo, 2015.
Chinese translation: 安全-I与安全-II-安全管理的过去和未来. 中国工人出版社, 2015.
Italian translation: Safety-I e Safety-II: Il passato e il futuro del safety management. Hirelia (2016).
Chapter 1: The Issues
Safety is a word that is used frequently and in many different contexts. Because it is used so often we all recognise it and we all believe that we know what it means – it is immediately meaningful. Because it is immediately meaningful to us, we take for granted that this is the case for others as well. Indeed, when we talk about safety we are rarely, if ever, met with the question ‘what do you mean by that?’ We therefore make the – unwarranted – inference that other people understand the word safety in the same way that we do. The assumption that we all know and agree on what safety means is so widespread that many documents, standards, guidelines – and even doctoral theses (!) – do not even bother to provide a definition.
Nemeth, C. P. & Hollnagel, E. (Eds.) (2014). Resilience Engineering in Practice, Vol. II: Becoming Resilient. Farnham, UK: Ashgate.
This is the fifth book published within the Ashgate Studies in Resilience Engineering series. The first volume introduced resilience engineering broadly. The second and third volumes established the research foundation for the real-world applications that then were described in the fourth volume: Resilience Engineering in Practice. The current volume continues this development by focusing on the role of resilience in the development of solutions.
Since its inception, the development of resilience engineering as a concept and a field of practice has insisted on expanding the scope from a preoccupation with failure to include also the acceptable everyday functioning of a system or an organisation. The preoccupation with failures and adverse outcomes focuses on situations where something goes wrong and tries to keep the number of such events and their (adverse) outcomes as low as possible. The aim of resilience engineering and of this volume is to describe how safety can change from being protective to being productive, and increase the number of things that go right by improving the resilience of the system.
Wears, R. L., Hollnagel, E. & Braithwaite, J. (Eds.) (2015). The Resilience of Everyday Clinical Work. Farnham, UK: Ashgate.
Health systems everywhere are expected to meet increasing public and political demands for accessible, high-quality care. Policy-makers, managers, and clinicians use their best efforts to improve efficiency, safety, quality, and economic viability. One solution has been to mimic approaches that have been shown to work in other domains, such as quality management, lean production, and high reliability. In the enthusiasm for such solutions, scant attention has been paid to the fact that health care as a multifaceted system differs significantly from most traditional industries. Solutions based on linear thinking in engineered systems do not work well in complicated, multi-stakeholder non-engineered systems, of which health care is a leading example.
A prerequisite for improving health care and making it more resilient is that the nature of everyday clinical work be well understood. Yet the focus of the majority of policy or management solutions, as well as that of accreditation and regulation, is work as it ought to be (also known as ‘work-as-imagined’)
Copyright © Erik Hollnagel 2016
All Rights Reserved.