“You should fly. It’s safer.”
It’s a fact. The odds are in your favor compared with auto travel. It’s not even close, we often remind the flight-fearing traveler.
Yet two of the smartest people I have known refuse to fly despite agreeing with this statistic. I think of them every time I board a flight. It was no different when I returned from vacation on a JetBlue flight that retraced some of the northbound path taken by another JetBlue flight some weeks earlier.
That earlier flight, an Airbus A320 journeying from Cancún International Airport in Mexico to Newark Liberty International Airport on October 30, 2025, was likely also filled with vacationers. As reported the next day in the New York Times, passengers aboard Flight 1230 were likely dozing off two and a half hours into the flight from the Mexican resort when they were jolted to attention by a sudden drop in altitude. When the unwanted acrobatic maneuver was over, 15 or more passengers involuntarily extended their vacations at hospitals in Tampa, where the JetBlue pilots managed a successful emergency landing.
JetBlue representatives said the plane was taken out of service for inspection.
There are occasional commercial aviation mishaps, along with the rare disaster. For the flying public, reports of these mishaps are regular enough that a news story like this one can be safely tucked away until a broader pattern emerges. Understandably, that might have been the end of the story, especially considering that it’s been Boeing planes subject to unwanted excitement of this variety since the 737 Max woes began. The plane involved in this JetBlue flight, as well as the one in which I now occupied seat 23A, was an Airbus, thought to be less subject to what an FAA Expert Panel Report in February 2024 called “gaps in Boeing’s safety journey.”
That’s this plane, I thought to myself. What did the ensuing investigation uncover? Mechanical failure? Software, perhaps even malware (though very unlikely)?
The “cause”
The Times dutifully reported the results of that investigation in a story it ran on November 28, 2025, the same day as my return flight to JFK. The story, adapted in part from an Airbus press release, opened with:
The European airplane maker said a recent incident had shown that “intense solar radiation may corrupt data critical to the functioning of flight controls.”
The impact on the Airbus A320 fleet and the airlines that operate it would be widespread. Potentially, 6,000 planes could be affected, with some requiring hardware changes, The Times reported after studying the extent to which the A320 has been adopted by airlines worldwide.
That’s a major impact, I thought. And then I read,
In most cases, the issue can be addressed relatively quickly by reverting to a previous software version.
Hold on.
“Single event upset”
So, how often is this radiation impact occurring? What’s the underlying physics?
It turns out that there’s a history-of-science story to be told. Nobel Prize winner Victor Hess discovered the existence of cosmic rays by measuring radiation levels at higher altitudes during balloon flights. That was before World War I. By the 1960s, radiation was a concern for crew health in the space program. A 1975 paper identified a satellite anomaly as a “single-event upset” (SEU) caused by radiation, and by 1979, it was understood that electronics could be disrupted even at sea level.
By 1990, the FAA acknowledged that there was a risk to air travelers. The Airbus A320, a fly-by-wire aircraft with greater reliance on computers, was designed during a time when this phenomenon was well understood in the aeronautical engineering community.
As software engineers at the time would have understood, addressing SEUs would require hardware and software features. ECC memory, for instance. Sanity checks, too, in case a flipped bit produced wildly out-of-range values or commands.
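To make the idea concrete, here is a minimal sketch in C of the kind of defensive pattern involved. Everything here is hypothetical: the struct, limits, and function names are illustrative, not Airbus’s ELAC design. Each critical value is stored alongside its bitwise complement so a single flipped bit is detectable, and a range check rejects any corruption that slips past the redundancy.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical SEU-tolerant storage for a flight-control command.
 * Names and limits are illustrative, not Airbus's actual ELAC code. */

#define ELEV_CMD_MIN_DEG (-30.0f)
#define ELEV_CMD_MAX_DEG ( 30.0f)

/* Store each critical value with its bitwise complement so that a
 * single flipped bit in either copy is detectable on read. */
typedef struct {
    uint32_t value;       /* raw bits of the float command    */
    uint32_t complement;  /* ~value, written at the same time */
} guarded_word;

static void guarded_store(guarded_word *g, float cmd) {
    uint32_t bits;
    memcpy(&bits, &cmd, sizeof bits);
    g->value = bits;
    g->complement = ~bits;
}

/* Returns 0 and writes the command on success. Nonzero means the two
 * copies disagree or the value is out of range; the caller should fall
 * back to a safe default, such as the last known-good command. */
static int guarded_load(const guarded_word *g, float *cmd_out) {
    if (g->value != ~g->complement)
        return 1;  /* bit flip detected by the redundancy check */
    float cmd;
    memcpy(&cmd, &g->value, sizeof cmd);
    if (cmd < ELEV_CMD_MIN_DEG || cmd > ELEV_CMD_MAX_DEG)
        return 2;  /* sanity (range) check failed */
    *cmd_out = cmd;
    return 0;
}
```

Real avionics goes further, with redundant computers voting on outputs, but the principle is the same: detect the corruption and degrade gracefully rather than act on a garbage command.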
Features. So why would rolling back to an earlier version of software be the remedy for the A320 late in 2025? Wouldn’t features that address SEU from radiation in the current release also be present in earlier releases? Or was there perhaps some new, innovative way to address SEU that superseded the approaches taken in earlier releases?
Software life cycle management fundamentals
At this point, the software quality engineer in me perked up. Somehow, the entire radiation part of the story seemed curiously irrelevant.
I was not to be the only one asking questions.
Tony Fernandes is the founder of AirAsia Aviation Group. AirAsia’s current operations include a significant number of Airbus jets, and it has orders for almost 400 more Airbus aircraft. Fernandes expressed his concerns in a recent interview.
“It seems a bit bizarre; it is something it [Airbus] has to look at, it’s obvious it wasn’t tested,” Fernandes told the South China Morning Post. Fernandes speculated that Airbus might be stressed by commitments to meet its current goal of producing 820 planes in 2025.
Apart from the eye-catching radiation storyline, the surface cause was described as a problem in the Elevator Aileron Computer (ELAC 2) software, which manages the aircraft’s pitch and roll by translating commands from the pilot’s side-stick into movements of the elevators at the tail and the ailerons on the wings. The software rollback involved restoring a version dubbed “L103,” replacing “L104.” Thales, developer of the ELAC hardware, told Reuters that the software involved was not its responsibility.
The vulnerability behind the sudden nose-down movement experienced by the JetBlue passengers bound for Newark affects a wide variety of Airbus models, not only the A320: multiple variants of the A319 and A321 series are also affected. One estimate suggested that more than 3,000 A320-family jets were in the air around the time of the Airbus announcement. Picture those passengers on November 28, many with WiFi on board, reading about this software vulnerability and mentally reworking their safety calculations.
4 SDLC lessons. 2 inherent risks unbowed
Beyond sensationalized “solar radiation” or “cosmic ray” talk, there’s a serious software development life cycle (SDLC) conversation to be had at Airbus. I’d list these four main takeaways, all of them familiar:
- Test engineering
- CI/CD and pipeline quality
- Observability
- Supply chain
Notice that “coding” is not one of the takeaways. True, L104 might have introduced a build packaging mistake or a coding error, such as an accidentally deleted call to a service, misconfigured or misunderstood API parameters, or even a syntax error in the source code. These are all normal; a healthy SDLC catches them.
- Test engineering. A good test harness could simulate the corrupted, out-of-range data that should trigger ELAC’s range checks (see the fault-injection sketch after this list). Test harnesses need to be designed and built concurrently with software, not as a separate stage.
- CI/CD and pipeline quality. You can add Change or Configuration Management to this group. Whether or not Airbus used CI/CD in the pipeline that produced L104, it will be moving in that direction. Something broke in the SDLC pipeline that allowed this fundamental protection to “regress,” and the cause is unlikely to be a one-off.
- Observability. I’m adding this one partly because I’m part of a small IEEE working group developing a new version of the DevOps standard that will incorporate observability principles. The SEU is an excellent use case for observability because a test engineer can’t wait for a cosmic ray to strike, yet must ensure that a failure to respond to a critical SEU event can be observed (the sketch after this list shows the idea). This involves more careful requirements engineering and an understanding of the telemetry needed for critical features.
- Supply chain. While Thales may not have been involved in the problematic release of L104, the Airbus platform includes software and hardware from many suppliers. Integration across the computing supply chain presents challenges magnified by both the safety requirements and the desired product longevity. The apparent “Not our problem” response from Thales could indicate issues with joint information sharing and problem-solving, or even limited visibility for the third-party risk management team.
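Tying the test engineering and observability threads together: a test can inject the bit flip that nature won’t deliver on schedule, and the system’s reaction can be emitted as telemetry. The sketch below builds on the hypothetical guarded_word example above; again, every name is illustrative, and the printf stands in for whatever structured telemetry channel a real avionics monitor would use.

```c
#include <assert.h>
#include <stdio.h>

/* Fault-injection test: deliberately flip one bit in a stored command,
 * then verify that the guard detects it and that the detection surfaces
 * as an observable event. Builds on the hypothetical guarded_word
 * sketch above; none of this is Airbus's actual test code. */
static void test_single_bit_flip_is_detected(void) {
    guarded_word g;
    float out;

    guarded_store(&g, 12.5f);            /* a valid elevator command  */
    assert(guarded_load(&g, &out) == 0); /* healthy path works        */

    g.value ^= (1u << 30);               /* inject the "cosmic ray"   */

    int rc = guarded_load(&g, &out);
    assert(rc != 0);                     /* corruption must be caught */

    /* Observability hook: a real system would emit a structured event
     * and increment a counter here, not just print a line. */
    printf("EVENT seu_detected component=ELAC code=%d\n", rc);
}

int main(void) {
    test_single_bit_flip_is_detected();
    puts("SEU fault-injection test: passed");
    return 0;
}
```

Wired into a CI pipeline as a required gate, a test along these lines is the kind of check that could have failed the L104 build before it ever reached an aircraft, assuming the regression was in this protective logic, which the public reporting does not confirm.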
It could get worse, a lot worse. Lapses in any one of these four areas of the SDLC create opportunities for malware, a nightmare scenario for avionics.
AI to the rescue?
I worry that two inherent risks remain largely unmitigated: complexity and specialization. Managing the entire A320 feature set, or even its software bill of materials (SBOM / API-BOM), is a nontrivial undertaking. Consider that it is a product launched in 1987 with the need to support legacy hardware and software across a large installed base. Both complexity and specialization will march steadily toward greater opacity, challenging attempts at traceability, transparency and manageability.
No doubt Airbus, like the rest of us, will bring AI on board to help out — perhaps in L105. This will add both to the specialties needed and to the plane’s overall complexity. Considering public trepidation over AI generally, at what point will more travelers feel the odds are no longer in their favor?
This article is published as part of the Foundry Expert Contributor Network.