Crisis Exercises: Measure, Don't Just Play
Organizations exercise crisis management without knowing what they are training. Without clear learning objectives and measurable criteria, exercise success is self-assessment, and self-assessment is systematically biased.
The Problem
Organizations exercise regularly. Tabletops, staff exercises, scenarios. At the end, everyone is exhausted, everyone considers it valuable, and nobody can say precisely what improved.
That is not a quality judgment about the participants. It is a structural problem.
Activity Is Not Performance
A football team that measures training success in kilometers run has confused activity with outcome. Kilometers are an input, not an objective. The same applies to crisis exercises.
Exercises without explicit learning objectives are well-intentioned busyness. When the objective is unclear, development is incidental.
Qualitative feedback alone does not produce verifiable progress. Self-assessments are systematically biased: participants rate their own performance more generously than observers do. Observers focus on visible dynamics rather than decision quality. Without predefined criteria, it is impossible to determine whether a team has improved in any strategically relevant dimension.
The Scenario Is Not the Goal
In many exercises, teams try to solve the scenario. They want to contain the fictional crisis, restore operations, reach a satisfactory ending. That is psychologically understandable. Methodologically, it is often wrong.
The scenario itself is arbitrary. Today ransomware, tomorrow supply chain disruption. The capabilities that actually matter are independent of the storyline: decision clarity, role discipline, prioritization logic, communication under ambiguity.
Treating the scenario as the goal means training narrative rather than capability. The team may “win” the exercise and have strengthened nothing that counts in a real situation.
Making Complexity Measurable
Not all exercises are equal. But as long as there is no shared understanding of what makes an exercise complex, different exercises get compared as if they were the same.
Two dimensions drive exercise complexity:
Structural complexity: How many parallel problem clusters exist? One cluster is the baseline. Multiple parallel streams significantly increase coordination demands and cognitive load.
Dynamic complexity: How many injects over what time? Ten injects in two hours create a fundamentally different situation than ten injects in six hours.
Both dimensions interact: high inject density is substantially more demanding when multiple problem clusters run in parallel.
A simple complexity index makes this plannable and comparable, without claiming to measure “true” difficulty.
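As a minimal sketch of what such an index could look like (hypothetical: the parameter names, the per-hour density measure, and the multiplicative combination are illustrative assumptions, not an established formula):

```python
from dataclasses import dataclass

@dataclass
class Exercise:
    """Parameters that drive exercise complexity (names are illustrative)."""
    parallel_clusters: int   # structural complexity: parallel problem streams
    injects: int             # dynamic complexity: number of injects
    duration_hours: float    # time window the injects are spread over

def complexity_index(ex: Exercise) -> float:
    """Illustrative complexity index; assumed formula, not a standard.

    Structural and dynamic complexity are multiplied rather than added,
    reflecting the observation that high inject density is far more
    demanding when several problem clusters run in parallel.
    """
    structural = ex.parallel_clusters          # one cluster is the baseline
    dynamic = ex.injects / ex.duration_hours   # inject density per hour
    return structural * dynamic

# Ten injects in two hours vs. ten injects in six hours:
dense  = Exercise(parallel_clusters=3, injects=10, duration_hours=2)
sparse = Exercise(parallel_clusters=1, injects=10, duration_hours=6)
print(complexity_index(dense))   # 15.0 -- parallel streams, high density
print(complexity_index(sparse))  # ~1.7 -- single stream, low density
```

The multiplication encodes the interaction described above: raising inject density in a three-cluster exercise moves the index far more than the same increase in a single-cluster one.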
What Follows
Without an objective complexity assessment, effects get conflated. A team that struggles in a highly complex exercise may still demonstrate strong capability, while a smooth run through a low-complexity exercise says little about real resilience.
Exercises should therefore be constructed around clearly formulated learning objectives and evaluated against transparent performance criteria. The scenario is the vehicle. The capability is the destination.
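What "transparent performance criteria" could look like in practice, as a hedged sketch: the capability name, the criteria, and the scoring scheme below are invented for illustration, not taken from any specific methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One observable, predefined performance criterion."""
    description: str
    max_score: int
    score: int | None = None   # filled in by observers during the exercise

@dataclass
class LearningObjective:
    """A capability-level objective, independent of the scenario storyline."""
    capability: str
    criteria: list[Criterion] = field(default_factory=list)

    def result(self) -> float:
        """Share of achievable points across all criteria scored so far."""
        scored = [c for c in self.criteria if c.score is not None]
        if not scored:
            raise ValueError("no criteria scored yet")
        return sum(c.score for c in scored) / sum(c.max_score for c in scored)

# Example: 'role discipline' as a scenario-independent capability
objective = LearningObjective(
    capability="role discipline",
    criteria=[
        Criterion("decisions taken only by the designated role owner", 5),
        Criterion("handovers explicitly announced and acknowledged", 5),
    ],
)

# After the exercise, observers score against the predefined criteria:
objective.criteria[0].score = 4
objective.criteria[1].score = 3
print(f"{objective.capability}: {objective.result():.0%}")  # role discipline: 70%
```

The point of the structure is that the criteria exist before the exercise starts: observers score against them instead of forming impressions afterwards, which is what makes results comparable across exercises.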
What is not defined cannot be measured. What is not measured cannot be systematically improved.
Quotable
“Exercises without learning objectives are well-intentioned busyness.”
“The scenario is the vehicle. The capability is the destination.”
“What is not defined cannot be measured. What is not measured cannot be systematically improved.”
→ How Rico Kerstan designs crisis exercises: Services
→ The methodological foundation: Approach