SHELL Analysis Explained: Human Factors Investigation for Safety
Most workplace incidents are not caused by a single mechanical failure or a single moment of inattention. They result from a system where multiple elements — the tools people use, the procedures they follow, the environments they work in, and the interactions between individuals — fail to fit together properly. When a safety investigation only asks "who made the mistake," it misses the structure that made the mistake likely.
The SHELL model was built to address exactly that gap. It is a human factors framework that maps the relationships between the central operator and every significant element of the system around them. Originally developed for aviation, it has since been adopted by ICAO as a standard investigative lens and applied across manufacturing, healthcare, and any industry where human performance is a safety-critical variable.
This article explains what each SHELL component represents, how the interfaces between components produce failure, and how the model is used in practice.
Background: Where the SHELL Model Came From
The model was first introduced by Elwyn Edwards in 1972. Edwards was working in aviation human factors research and was trying to create a structured way to think about how humans interact with the systems around them. His original framework — called the SHEL model — identified four core elements: Software, Hardware, Environment, and Liveware.
In 1975, Frank Hawkins, an airline pilot and human factors specialist, refined the model and gave it the visual "building block" structure that most practitioners recognize today. The extra L was added to represent the second liveware interface — the human-to-human dimension — making the acronym SHELL.
ICAO adopted the model into its safety management guidance and it now appears in ICAO Doc 9859, the Safety Management Manual. The adoption matters because it signals an international consensus that human factors analysis is not optional in serious incident investigation — it is a required component of understanding what actually happened and why.
The core premise of the SHELL model, as articulated through ICAO's guidance, is that the human is rarely if ever the sole cause of an accident. The relevant question is not which person failed, but how the components surrounding that person created conditions where failure became the likely outcome.
The Five Components
Software (L-S Interface)
Software in the SHELL model does not refer exclusively to computer programs. It covers everything that governs how work is done: standard operating procedures, regulations, checklists, training programs, work instructions, company policies, and conventions.
A mismatch at the L-S interface occurs when the procedures a person is expected to follow do not fit how work actually happens. This can mean a checklist that was written for an older process and never updated to reflect current equipment. It can mean a regulatory requirement that is technically compliant but impractical to execute under real operating conditions, leading workers to develop informal workarounds. It can mean training that covers the nominal scenario but leaves gaps for the situations that actually occur.
A practical example: a maintenance technician is required to follow a written procedure that assumes the equipment has been fully de-energized and locked out. In practice, a specific configuration of the equipment makes the standard lockout procedure ambiguous. The technician has done this task dozens of times and has developed a personal approach that seems to work. The procedure is never updated because no incident has occurred yet. The L-S mismatch exists and is accumulating risk whether or not an incident has been triggered.
Hardware (L-H Interface)
Hardware covers the physical elements of the system: equipment, controls, displays, tools, vehicles, protective gear, facilities. The L-H interface is where human capabilities and limitations meet the physical design of the systems people use.
Poor ergonomic design is a classic L-H failure. A control panel where similar switches are positioned close together increases the probability of selection errors. A diagnostic display that requires multiple steps to reach critical information slows response under pressure. Personal protective equipment that is technically compliant but uncomfortable enough that workers avoid wearing it for extended periods creates a systemic safety gap.
Hardware mismatches are often easy to identify after an incident and difficult to justify addressing before one. The physical evidence makes them visible. What the SHELL model emphasizes is that the gap between human capability and hardware design should be assessed proactively, not discovered through failure.
Environment (L-E Interface)
Environment covers the full context in which work occurs. This includes the immediate physical environment — lighting, temperature, noise, workspace layout, weather conditions for outdoor operations — and the broader organizational, regulatory, and social environment that shapes how work is managed and prioritized.
Physical environment effects on performance are well-documented. Inadequate lighting increases error rates on precision tasks. Extreme temperature or high humidity degrades both cognitive performance and physical dexterity. High ambient noise makes verbal communication unreliable and can cause workers to miss critical signals.
The broader organizational environment matters just as much. A production culture where schedule pressure is consistently prioritized over safety procedure compliance is an environmental condition. A shift structure that creates chronic fatigue is an environmental condition. A regulatory environment where enforcement is inconsistent enough that non-compliance carries low perceived risk is an environmental condition. None of these appear on a checklist, but all of them shape how the central human operator actually performs.
Liveware — Central Operator (the Hub)
The central Liveware is the human at the center of the model: the operator, technician, pilot, nurse, or worker whose performance is being analyzed. This component represents the individual's current state — their training, experience, physical condition, fatigue level, stress, and cognitive load.
The SHELL model places Liveware at the center not to assign blame but to acknowledge that human performance is variable and context-dependent. A highly skilled technician performing a familiar task under normal conditions operates very differently from the same technician performing a novel task under time pressure after an extended shift. Both are the same person. The difference is the state they are in and the conditions they are working under.
Understanding the central Liveware means collecting information about relevant individual factors: current training currency, time on task, hours worked, any known stressors, task familiarity, and whether conditions at the time of the incident matched what their training prepared them for.
Liveware — Other People (L-L Interface)
The second Liveware component represents all other people in the system: supervisors, colleagues, crew members, dispatchers, managers. The L-L interface captures the human-to-human dimension of work: communication, leadership, teamwork, authority gradients, and how information flows between people.
This is often the most consequential interface in serious incidents. Communication failures, breakdowns in crew coordination, and situations where hierarchy suppresses the transmission of safety-critical information have been contributing factors in some of aviation's most studied disasters.
The 1977 Tenerife airport collision — the deadliest accident in aviation history — involved a significant L-L failure. A combination of communication ambiguity between the flight crew and air traffic control, time pressure, and an authority gradient that made it difficult for crew members to challenge the captain's decision contributed to the outcome. No individual failure in isolation caused it. The system of human interactions did.
The 1978 crash of United Airlines Flight 173 is similarly instructive. The crew became so focused on troubleshooting a landing gear indication that the first officer and flight engineer failed to effectively communicate fuel state concerns to the captain in time to prevent the aircraft from running out of fuel. The L-L interface — the dynamics of how information moves between people in authority relationships — was a primary contributing factor.
How the Interfaces Drive the Analysis
The SHELL model is most useful not as a checklist of components but as a map of interfaces. An incident rarely lives entirely within one component. It typically emerges from a mismatch between components: a procedure (S) that doesn't match equipment capability (H), a physical environment (E) that creates conditions the operator (L) was not trained for, or a communication dynamic (L-L) that prevented a known risk from being escalated before it became a failure.
The investigator's task is to systematically examine each interface and identify where mismatches exist. Which procedures were inadequate for the actual task? Which equipment design decisions created error traps? Which environmental conditions degraded performance in ways not accounted for in the standard procedure? Was the operator's state at the time of the incident consistent with what the task required? Were there team or communication factors that contributed?
This systematic approach shifts the investigation from finding fault to finding the conditions that enabled failure — a distinction that makes the difference between corrective actions that prevent recurrence and corrective actions that simply assign blame.
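The interface-by-interface questioning described above can be expressed as a simple reusable checklist. The following sketch is purely illustrative — the interface labels, prompt wording, and function names are assumptions for demonstration, not part of any standard SHELL tooling:

```python
# Illustrative sketch only: the investigator's interface questions
# captured as a checklist, with a helper that filters down to the
# interfaces where a mismatch was actually recorded.

INTERFACE_PROMPTS = {
    "L-S": "Which procedures were inadequate for the actual task?",
    "L-H": "Which equipment design decisions created error traps?",
    "L-E": "Which environmental conditions degraded performance "
           "in ways the standard procedure did not account for?",
    "L (central)": "Was the operator's state consistent with what "
                   "the task required?",
    "L-L": "Were there team or communication factors that contributed?",
}

def collect_mismatches(findings):
    """Return only the interfaces where a mismatch was noted.

    `findings` maps an interface label to a free-text note,
    or None if no mismatch was identified at that interface.
    """
    return {iface: note for iface, note in findings.items() if note}

# Example drawn loosely from the lockout scenario discussed earlier.
example = {
    "L-S": "Lockout procedure ambiguous for this equipment configuration",
    "L-H": None,
    "L-E": None,
    "L (central)": "Technician at end of an extended shift",
    "L-L": None,
}

print(collect_mismatches(example))
```

The point of structuring the questions this way is coverage: every interface gets asked about explicitly, so an investigation cannot quietly skip one.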
Applying SHELL Outside Aviation
The SHELL model originated in aviation and remains central to aviation safety practice, but its logic applies wherever human performance interacts with systems. Manufacturing, healthcare, energy, and construction have all adopted the framework with varying degrees of formal structure.
In healthcare, SHELL analysis has been used to examine surgical errors where the interface between the surgical team (L-L), the equipment available (H), the operating room environment (E), and the procedure protocols in use (S) all contributed to an adverse outcome. The model provides a way to examine a medical error without reducing it to a question of individual practitioner competence.
In manufacturing, L-H mismatches — control interfaces that create selection errors, machinery guards that make legitimate work difficult, displays that are hard to read under production floor lighting — are common contributors to both quality failures and safety incidents. SHELL analysis surfaces these systematically rather than attributing errors to worker carelessness.
The model's application to any safety-critical context follows the same logic: identify the central human operator, map the five components and their current state, examine each interface for mismatches, and use those mismatches to define targeted corrective actions.
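The four-step logic above can be sketched as a minimal data structure. This is a hypothetical illustration — the record types, field names, and method are invented for this example and do not represent a standard SHELL data format:

```python
# Minimal sketch (hypothetical types) of the four-step application logic:
# identify the operator, map component state, record interface mismatches,
# and derive targeted corrective actions from those mismatches.
from dataclasses import dataclass, field

@dataclass
class Mismatch:
    interface: str      # e.g. "L-S", "L-H", "L-E", "L-L"
    description: str

@dataclass
class ShellAnalysis:
    operator: str                                     # step 1: central human
    components: dict = field(default_factory=dict)    # step 2: component state
    mismatches: list = field(default_factory=list)    # step 3: interface exam

    def corrective_actions(self):
        # Step 4: each mismatch maps to a targeted action aimed at the
        # system condition, not at blame for the central operator.
        return [f"Address {m.interface} mismatch: {m.description}"
                for m in self.mismatches]

analysis = ShellAnalysis(operator="maintenance technician")
analysis.components["Software"] = (
    "lockout procedure last revised before equipment upgrade")
analysis.mismatches.append(
    Mismatch("L-S", "procedure does not cover current equipment configuration"))
print(analysis.corrective_actions())
```

Note that corrective actions are generated from mismatches rather than from the operator record — that design choice mirrors the model's premise that fixes target conditions, not individuals.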
WhyTrace Plus supports structured human factors investigations with guided frameworks and documentation tools — designed for EHS managers and safety engineers who need thorough, traceable incident analysis. See how it works.
Limitations and Practical Considerations
The SHELL model is a conceptual framework, not an algorithm. It does not tell you how deep to investigate each component or how to weigh competing contributing factors against each other. It is a lens for organizing information and ensuring systematic coverage, not a mechanism for automatically producing conclusions.
Some practitioners note that the original SHELL model does not explicitly capture organizational factors as a distinct component. This led to the development of SHELLO, which adds an "O" for Organizations — recognizing that management structures, safety culture, and institutional priorities shape all four interfaces and deserve their own analysis thread. Organizations conducting serious incident investigations should consider treating organizational factors as a discrete dimension alongside the original five components.
The model is also most effective when the people using it understand human factors well enough to know what kinds of data to collect for each interface. A SHELL analysis conducted by an investigator who is not familiar with cognitive load, authority gradient effects, or ergonomic failure modes will underutilize the framework. It is a starting point for human factors investigation, not a substitute for domain expertise.
If your current incident investigation process stops at identifying who was involved rather than how the system created conditions for failure, the SHELL model offers a practical structure for doing more thorough work. Explore how WhyTrace Plus supports structured RCA.
Related Resources
| Article | Description |
|---|---|
| 5 Whys vs Fishbone Diagram vs Fault Tree: Which RCA Method to Use | Side-by-side comparison of three core RCA methods and when to apply each |
| 5 Whys Analysis: Complete Guide with Examples | Step-by-step guide to running a structured 5 Whys investigation |
| OSHA Incident Investigation Requirements | What OSHA expects from incident investigations and how to meet those requirements |
| CAPA Management: Closing the Loop on Corrective Actions | How to track and verify corrective actions so findings actually prevent recurrence |