Tejas at Dubai: Pride, Pressure and the Need for Meaningful Safety Investigations
Aviation safety investigations are not about finding a culprit; they are about keeping the next crew alive. In any serious Safety Management System (SMS), investigation is the central learning engine. When that engine is weak, captured, or opaque, the rest of the safety architecture becomes cosmetic—nice on paper, fragile in reality.
The recent Tejas crash at the Dubai Airshow, where an experienced test pilot lost his life in front of a global audience, is a painful reminder of this truth. It shows, once again, that human factors and organisational culture sit in the background of every technical failure. It also exposes the limits of our current investigation model—especially when inquiries are conducted behind closed doors, under military rules, with limited permeability to external expertise.
If we choose, this accident can push us beyond mere “resilience” (bouncing back) towards anti-fragility (emerging stronger). That choice will be made in how we investigate, who we allow into the room, and whether we are willing to let the truth reshape our system.
The invisible weight in the cockpit: pride, pressure, cognitive lock-up
An accident may arise from many factors—technical, environmental, organisational. But in the last critical moments, it is often the human burden that shapes outcomes.
A display pilot at an international airshow like Dubai is not just flying an aircraft. He is carrying:
The pride of performing on a world stage,
The reputation of an indigenous aircraft programme,
The implicit message that “India can match the best”.
Inside that context, powerful internal pressures build up:
The world is watching; this must look perfect.
This is an Indian-built aircraft; we cannot lose face.
I must complete the manoeuvre as briefed; breaking off will look like failure.
None of this appears in any SOP, yet it sits heavily in the cockpit.
Under such psychological load, cognitive lock-up becomes a real risk. Once the mind commits to a low-level, time-critical manoeuvre, it can become very hard to abandon or modify it, even when cues suggest it should be discontinued. Deviations are rationalised (“I can still make it”), early exit options are discounted, and attention narrows to “finishing the figure” while margins in height, energy or geometry silently erode.
This is not about blaming the pilot. It is about recognising how human cognition behaves under prestige, pressure and expectation—and designing our training, display profiles and culture to protect the pilot from those invisible forces. An investigation that ignores this inner reality and looks only at stick movements and engine parameters will miss the most important lessons.
Investigation is the heart of SMS and accident prevention
A genuine SMS is a living system: it identifies hazards, assesses risk, monitors performance and continuously improves. Safety investigations sit in the middle of this loop:
They convert events into information,
Information into insight,
Insight into action.
When done properly, they refine training and SOPs, influence procurement and test planning, and change how leaders think about risk. When handled superficially, they deliver a convenient label—“pilot error”, “unexpected weather”, “technical malfunction”—and a few generic recommendations. The system then absorbs the shock and returns to its old shape. That is resilience, but not growth.
SMS thinking demands three uncomfortable acknowledgements:
Most accidents are system failures, not individual failures.
Organisations drift: over time, economic, political and operational pressures erode safety margins.
Learning requires honesty, and honesty requires both psychological safety and some degree of independence.
Investigation is where these truths either surface—or are buried.
Tejas: closed inquiry, closed learning?
By current practice, the Tejas crash will be examined by a Court of Inquiry under the defence forces’ rules. That is necessary, but not sufficient.
A Court of Inquiry in a military setting typically means:
A closed-door process,
Investigators drawn entirely from within the same hierarchy,
A strong focus on technical and operational aspects,
Minimal involvement of independent Human Factors (HF) experts from the civil side.
This is not a criticism of the officers involved; the problem is structural. A uniform, tightly-knit group will inevitably face blind spots:
A tendency to protect the image of the service or the programme,
Hesitation to question assumptions made at higher levels,
Unconscious bias when national prestige is at stake.
In addition, there is a non-permeable membrane between military and civil safety ecosystems. Civil HF specialists and accident investigators bring decades of experience in structured HF analysis and SMS practice. When they are kept out, both sides lose: the military loses fresh perspective, and civil aviation loses access to critical lessons.
We should retain Courts of Inquiry where required, but:
Decision-makers should deliberately allow an outside view. Make the structure permeable enough for HF and safety experts from the civil side to contribute—whether as members, observers or formal peer reviewers.
This is not about putting the armed forces on public trial. It is about honouring the pilot by ensuring that every useful perspective is heard, and that the system emerges stronger, not merely defended.
The CDS Mi-17 crash: when explanation stops too early
In the CDS General Bipin Rawat helicopter accident, the preliminary statement spoke of entry into clouds due to an “unexpected change in weather”, leading to loss of situational awareness and controlled flight into terrain.
For seasoned pilots who have flown in hills “for donkeys’ years”, the idea that clouds can rapidly develop and cover ridgelines is neither new nor unexpected. That is basic mountain meteorology. To stop at “unexpected clouds” is to stop the inquiry too early.
A fair and transparent HF-oriented investigation would have probed deeper:
What meteorological information and local knowledge informed the go/no-go decision?
What organisational norms exist for VIP or high-prestige flights in marginal conditions?
Was there implicit pressure that “this mission must go”?
How robust were CFIT-avoidance training and procedures for that specific terrain and context?
To simply classify it as “human error” or “unexpected weather” is a comfortable closure, not a complete explanation.
Culture, reverse engineering and the missing voice of safety
Another uncomfortable reality: safety is often the last to be consulted when it matters most.
When new assets are procured, new technology is tested, or international displays are planned, the sequence is often:
A strategic or political decision is made: this aircraft will participate, this profile will be flown.
Operations and engineering are tasked to “make it happen”.
Safety is invited at the end to sign the paperwork, often under subtle pressure.
If an accident occurs, the investigation risks being “reverse-engineered” to protect the original decision, with the narrative shaped towards a convenient human or technical error.
This is an effective system for reputation management, but a weak system for accident prevention.
A healthier culture would:
Invite safety early, before profiles and timelines are frozen;
Define explicit, non-negotiable “red lines” (weather, margins, configuration) beyond which operations simply do not proceed;
Accept that investigations may reveal uncomfortable truths about planning, resourcing and leadership.
To say “safety decides when enough is enough” has meaning only if safety is actually empowered to stop or reshape high-risk operations—even when the world is watching.
From resilience to anti-fragility
Resilience means surviving a shock and returning to the prior state. Anti-fragility means using each shock to move to a better state.
A safety system becomes anti-fragile when each accident or serious incident leads to:
Visible, specific changes in procedures, training and oversight,
Stronger protections for honest reporting and self-disclosure,
Better integration of HF thinking into everyday planning and decision-making,
Structural reforms—more independence, more permeability, clearer veto powers for safety.
If, after Tejas and the CDS Mi-17 crash, we end up only with thicker checklists and sterner briefings, we have merely bounced back. We have not grown.
Five immediate steps for organisations and HF professionals
To move in the right direction, five practical steps can be taken now:
- Make investigations structurally permeable
Retain military Courts of Inquiry where needed, but:
Embed external HF/SMS experts from the civil side as members, observers or peer reviewers.
Create a joint civil–military HF panel to examine major accidents and share de-identified lessons across all of aviation.
- Establish or strengthen an independent safety investigation board
Give it statutory independence from operators, programme offices and chains of command.
Make accident prevention its sole mandate, clearly separate from disciplinary or legal functions.
Commit to publishing factual and final reports with appropriate redactions.
- Protect safety information from misuse
Legally firewall CVR/FDR data, statements and voluntary reports from routine punitive use.
Make it clear that honest mistakes will be used to fix systems, not to destroy careers.
Reserve punishment for wilful violations and gross negligence.
- Embed Human Factors into SMS and critical decisions
Build permanent HF teams with direct access to senior leadership.
Require HF analysis for new aircraft, high-risk tests, airshow routines and all major accidents.
Use recurring HF themes—cognitive lock-up, authority gradients, time pressure—to shape training and leadership development.
- Give safety a real gateway role
No major procurement, new technology demonstration or international display should proceed without a robust, documented safety case.
Give safety managers the authority to say “stop” or “not yet”, with appeal only at the highest accountable level.
Normalise conservative calls; cancelling a sortie for safety reasons must be seen as professionalism, not loss of face.
If we treat the Tejas crash and the CDS Mi-17 tragedy as isolated misfortunes, we will patch a few procedures and move on. That is resilience—better than nothing, but less than what the lost crews deserve.
If instead we confront the human pressures in the cockpit, open up closed structures, invite outside expertise, strengthen independence and give safety a genuine voice in critical decisions, we move towards anti-fragility: a system that becomes safer precisely because it is willing to learn, openly and honestly, even when the lessons hurt.
That willingness to learn—fairly, transparently and without fear—is the deepest respect we can offer to those who flew before us and did not return.
Discover more from Safety Matters Foundation