Ignored warnings, design flaws & the Air India plane crash

AFTER reading the newly released preliminary report on the tragic crash of Air India flight AI171, I knew I had to speak up. On June 12, a Boeing 787-8 Dreamliner took off from Ahmedabad, heading for London Gatwick. Within seconds of lift-off, both engines lost power. The aircraft plunged into the BJ Medical College campus, barely a mile from the runway. All but one of the 242 people on board — 230 passengers, 10 cabin crew and two pilots — were killed. Among them were children, students, doctors and foreign nationals.

The report’s findings make one thing clear: many of our worst fears were justified. This was not a random fluke. It wasn’t turbulence or a sudden weather event.

As a physicist, I study how systems fail — how entropy creeps in, how a tiny lapse can unravel the whole system. That’s exactly what happened here. This wasn’t one mistake. It was a network of ignored warnings, design flaws, regulatory gaps, and human confusion, all converging into catastrophe.

Let’s break it down. The flight data recorder shows that the aircraft had just hit take-off speed — around 180 knots — when both engine fuel switches moved from “Run” to “Cutoff”. That’s not supposed to happen. The cockpit voice recorder captures one of the pilots asking, “Why did you cut off?”, to which the other replies, “I didn’t.” The aircraft, now powerless, began to fall. In less than a minute, the Dreamliner, one of the most advanced passenger planes in the world, was gone.

A known defect, ignored: Here’s where science meets silence. Back in 2018, the US Federal Aviation Administration (FAA) issued a Special Airworthiness Information Bulletin (SAIB). It flagged a critical vulnerability in the fuel control switches used in Boeing aircraft, including the 787. The locking feature — which prevents accidental shutdown — could become disengaged. It meant that a switch could be moved without intention, possibly in flight.

But instead of issuing a mandatory directive, the FAA merely “advised” the airlines to look into it. Air India didn’t act. The inspections were not legally required, so they weren’t performed. The aircraft continued to fly with a known weak point in its design. And that one overlooked switch, no bigger than a mobile SIM card, may have doomed hundreds of lives.

In physics, this is what we call sensitivity to initial conditions: a butterfly effect. One small variable, left uncorrected, can bring down an entire system. In aviation, the principle is the same. But here, it isn’t natural law at work; it is human decision-making.

The report reveals more. On the day of the crash, the aircraft was flying with four unresolved technical issues, including faults in cockpit surveillance, printer systems and nitrogen generation. These were deferred under the “Minimum Equipment List” (MEL) provisions, which allow aircraft to operate under certain known conditions.

While none of these alone would cause a crash, they point to a deeper problem: an airline operating flights with layers of compromise. In science, we call this accumulated system stress. In real life, it’s what happens when “safe enough” becomes normal.

The captain was highly experienced, with over 8,000 hours on the 787. The co-pilot had only recently qualified and had zero command hours on this aircraft type. The crash report shows no sign of intentional error, but the voice recordings hint at panic and confusion. A Ram Air Turbine (RAT) deployed automatically, attempting to generate backup power. The pilots tried to restart the engines midair. But at that altitude, they had less than 30 seconds. This wasn’t just a mechanical failure. It was also a human-machine breakdown, where cockpit design, crew training and stress dynamics all interacted under extreme conditions — and lost.

Systemic questions we must ask: The hard questions before us are scientific, institutional and moral. Why are safety warnings treated as optional until a crash makes them urgent? Why did regulators, manufacturers and airlines allow a known switch vulnerability to persist? Why is cockpit redundancy not designed to protect against unintentional shutdown during the most critical phase of flight? Why does aviation culture still tolerate deferred maintenance as business-as-usual?

Let’s be clear: Boeing is responsible. So is Air India. So are the FAA and other global regulators. But beyond blame, we have an opportunity — a rare one — to re-engineer how aviation safety works.

What needs to change now? Advisories should be treated as binding. If a safety issue could lead to loss of thrust, it should automatically trigger mandatory inspection worldwide. Transparency must be non-negotiable. Airworthiness records, maintenance logs and pilot readiness shouldn’t live behind corporate walls. They should be visible to watchdogs, if not the public. Design thinking must change. Systems must anticipate not just mechanical failure but human error, stress and edge cases. International coordination must improve. When one country finds a flaw, others must not wait to act.

Failures can be our greatest teachers if we learn from them. But learning requires honesty. It requires urgency. And it requires breaking the culture of “wait and watch" that has seeped into global aviation. Flight AI171 didn’t fall from the sky by chance. It was brought down by a series of decisions, omissions and warnings unheeded.

Let’s not reduce this to a tragic chapter in a report. Let this be the turning point where safety is no longer negotiable. We owe that much to the 242 people who boarded that flight, expecting to land.

Nishant Sahdev is a physicist at the University of North Carolina, US.
