Posted by jakey_bakey 3 days ago
I will be slightly paraphrasing from memory here, but I was certainly surprised by how calm he was about the whole thing; there's no way I'd board one of those things.
There are variations on this depending on the plane model, of course. Some older planes can use an external starter for their engines, but I think that’s very rare now.
Now, interestingly, the 787 is a "bleedless" aircraft, so it doesn't use high-pressure air from the APU to spool up the engines. I believe it can use its hefty bank of lithium-ion batteries to start its engines if the APU (and associated electrical generator) is INOP.
Not a pilot/engineer - just an enthusiast. Someone more au fait with the 787 might be able to correct me on the above.
The idea being that the APU isn't particularly clean burning, not compared to power plant emissions. It's been a long while since I've heard anything about that plan, for or against.
Where "modern" here includes jet airliners made in the 70s yes.
>It is also needed to start the main engines
The engines need an air source, and the APU can be an air source, but at one point at least, big airlines preferred using ground hookup provided air sources for starting, in order to save gas. Next time you fly, look at the jetway. There will be a large yellow duct system underneath it that can be hooked up to the plane to provide pneumatic pressure and air conditioning air without starting the APU. There are similar hookups for electrical power so that a plane won't drain its battery during routine turnover operations.
The bottom price flights I've taken recently don't seem to hook either up though, preferring instead to start the APU during taxi to the gate while shutting down one engine, shut down the other engine once they are at the gate, and reverse the process to taxi back out to the runway. The turnaround time is so short, and the required work to clean and restock the cabin so little, I bet they just don't pay for ground hookups.
However, if it is not a test flight, a RAT deployment should make you very uncomfortable and worried…
I’ve been around a lot of airplanes and I can’t say I’ve seen or heard a ram air turbine deployed in flight. There was a recent incident involving a Frontier Airlines flight in which the RAT was deployed when the aircraft was put in emergency electrical configuration. The deployment of the RAT would be quite rare.
Also, on most models the RAT can power a limited set of systems indefinitely, not all systems for a limited amount of time.
> After starting the descent, the flight crew made an announcement to the passengers; however, unbeknownst to the flight crew, the noise generated by the RAT (because of its high rotation speed) prevented the passengers and the cabin crew from hearing the announcement.
It always surprised me that there aren’t small, local lithium batteries to provide backup power for critical components like the smoke detectors. Is the risk of those catching fire considered too high?
There is such a battery, though it's lithium-ion only on the 787. If all power generation is dead, the most critical flight instruments and gauges get about 20-30 minutes of power from the plane's batteries: things like your backup old-fashioned gauges, the engine computers, and maybe some basic flight computer on newer planes. The RAT is intended to keep flight surfaces operational when everything else is utterly fucked, so it usually produces the same kind of energy as whatever the primary flight control system uses, which until recently was hydraulic power. On civilian airliners they generate tens of kilowatts. Airliners do not want to carry around an EV-sized battery for the extremely rare occasions when you lose all systems, because that's a waste of gas. The RAT provides the same functionality for lower weight.
When the RAT is deployed, you do not care much whether a smoke detector is powered, you are already vectoring towards an attempted landing.
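The weight tradeoff can be sanity-checked with rough numbers (all of them assumptions for illustration, none from the thread): say the RAT must cover a ~10 kW flight-control load for a 30-minute emergency descent, and a lithium pack delivers ~150 Wh/kg at the pack level:

```python
# Back-of-the-envelope for the RAT-vs-battery tradeoff. Every figure here
# is an assumption for illustration, not a real aircraft number.
rat_power_kw = 10        # assumed emergency flight-control load
descent_minutes = 30     # assumed time needed to get on the ground
pack_wh_per_kg = 150     # assumed pack-level specific energy

energy_kwh = rat_power_kw * descent_minutes / 60
battery_kg = energy_kwh * 1000 / pack_wh_per_kg

print(f"{energy_kwh} kWh -> ~{battery_kg:.0f} kg of battery")
```

That looks modest, but the battery grows linearly with the diversion time it must cover and is dead weight on every flight for decades, while the RAT's weight is fixed no matter how long it has to spin.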
That said, a household smoke detector runs on next to no power. Obviously not the same device but surely it can operate on the same principle.
Jokes aside... I'm certainly part of the intended audience: point me at an interesting rabbit hole, and there I gooo.
A RAT provides backup electrical and/or hydraulic power for control surfaces (and other goodies). A RAT would certainly be inspected during a heavy check and likely even during line checks (e.g. an "A" check or equivalent). How often is gonna depend on the airplane. But to suggest that a critical piece of equipment isn't checked regularly is just silly.
Additionally, it's pretty much guaranteed that if an airplane comes with a RAT the RAT is required to be functional for ETOPS flights. That alone means you're gonna be inspecting it pretty frequently. ETOPS certification has three parts: airplane, airline, and humans. You'd want to look at the ETOPS Maintenance Document at whatever airline to be sure.
Outside of Asia (where domestic widebody flights are still common) I'd guess many if not most 787 flights are ETOPS flights.
I remember a decade or more ago I was on a US domestic flight - I forget exactly what, I think it was American from SFO to LAX - so I doubt it needed ETOPS. But the captain announced - while we were still at the gate - that he was getting an error in the cockpit saying the RAT was faulty. And he called maintenance, and they told him to try resetting something (a computer or circuit breaker or whatever) to see if that cleared the error - and when it didn’t, he announced we could not take off and would all have to go back into the terminal. Thankfully they had a spare plane a few gates over and they put us all on that (same crew, same passengers) so we only lost an hour or two.
Back to your flight, both the FAA and EASA require airliners to have a minimum equipment list (MEL). It's entirely unrelated to ETOPS (overwater flights). This list describes what equipment is required to be functional, what you can fly without and when. What's on the list is all going to come down to what kind of plane we're talking about. Could be you're not allowed to fly without a functional RAT ever. Could be that you can fly without a RAT as long as something else (e.g. APU) is functional. Could be you can only make a certain number of flights with a non-op RAT.
A real world example is that ATR 72 crash in Brazil recently. One of the PACKs (air conditioning / cabin pressurization) was not functioning on the accident plane. Per the MEL you can dispatch an ATR in that condition, but you're limited to a service ceiling of 17,000 ft. Unfortunately that put the flight in direct conflict with the weather.
Gravity?
You are constrained by battery, airspeed, and altitude.
“Probably” is doing a lot of work here, there could be a power failure without engine failure.
This discussion has nothing to do with engine out failure modes.
The 787's APU is not intended to run during flight. If it's running, you're in an engines-out scenario.
More interesting, a root cause analysis: https://news.ycombinator.com/item?id=33239443 https://ioactive.com/reverse-engineers-perspective-on-the-bo...
A 47-bit timestamp at 32 MHz would explain the duration (though not why it isn't 33 MHz?).
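That arithmetic checks out; a quick sketch, assuming a free-running 47-bit tick counter incremented at 32 MHz as the comment suggests:

```python
# Days until a free-running counter wraps, given its width in bits and its
# tick rate in Hz. The 47-bit / 32 MHz figures are the comment's assumption.
SECONDS_PER_DAY = 86_400

def overflow_days(bits: int, hz: float) -> float:
    """Days before a `bits`-wide counter ticking at `hz` overflows."""
    return (2 ** bits) / hz / SECONDS_PER_DAY

print(f"{overflow_days(47, 32e6):.1f} days")  # ~50.9, i.e. the 51-day directive
```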
Reverse Engineer’s Perspective on the Boeing 787 ‘51 days’ Directive - https://news.ycombinator.com/item?id=33239443 - Oct 2022 (55 comments)
Boeing 787s must be rebooted every 51 days to prevent 'misleading data' (2020) - https://news.ycombinator.com/item?id=33233827 - Oct 2022 (140 comments)
Boeing 787s must be turned off and on every 51 days (2020) - https://news.ycombinator.com/item?id=27117320 - May 2021 (42 comments)
Boeing 787s must be turned off and on every 51 days - https://news.ycombinator.com/item?id=27111650 - May 2021 (4 comments)
Boeing 787s must be turned off and on every 51 days to prevent 'misleading data' - https://news.ycombinator.com/item?id=22761395 - April 2020 (152 comments)
It just so happens that 2^52 nanoseconds is a little bit over 52 days.
I've seen the same thing with AMD CPUs where they hang after ~1042 days which is 2^53 10-nanosecond intervals.
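Both conversions above are easy to check:

```python
# Converting the two overflow figures above into days: 2**52 nanoseconds,
# and 2**53 ten-nanosecond intervals.
NS_PER_DAY = 86_400 * 1_000_000_000

print(2**52 / NS_PER_DAY)        # ~52.1 days
print(2**53 * 10 / NS_PER_DAY)   # ~1042.5 days, matching the AMD hang
```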
You can also see it by dropping 524535643 into a 32-bit float: that integer is clearly less than 2^53, but it is greater than 2^24, so the nearest representable float is off by 5.
This is even seen here:
#include <stdio.h>

int main(void) {
    float b = 524535643.0f; /* nearest representable 32-bit float is 524535648 */
    printf("%f\n", b);
    return 0;
}

output: 524535648.000000

Seriously, they should have posted here for some help!
What happened? Well, it turns out there was a timer that no one used that overflowed and caused an interrupt which wasn't handled any more; the interrupt handler fell through and caused a halt, the WDT fired and rebooted it, and some idiot hadn't stored that one setting in the NVRAM.
So then we had more problems: 5,000 devices with EPROMs in them, spread all over the planet, rebooting every 24 days. Many questions to ask about how the hell it ended up like that.
I hope people are asking these sorts of questions at Boeing.
Edit: also, the source code we had did not match what was on the devices. It turned out the engineer who provided the hex file hadn't copied that code to the file server and had left a year beforehand. We didn't find that out until the WDT fired and piqued our interest, and we couldn't reproduce it on the dev board because the software was different (we should have checked that beyond the label on the ROM, which was wrong!).
I don't see how that is possible given the maintenance required for these planes. Even the simple A checks ground a plane for hours every couple hundred flights while D checks take months to complete every 6-10 years.
Edit: minute not hour
You normally only count unplanned downtime in those stats for aircraft.
(1 - 0.999999) * (60 * 24 * 365)
EDIT: This chart agrees: https://en.wikipedia.org/wiki/High_availability#Percentage_c...
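The same arithmetic, generalized over a range of nines:

```python
# Downtime per year implied by N nines of availability, in minutes,
# following the calculation above.
MINUTES_PER_YEAR = 60 * 24 * 365

for nines in range(3, 8):
    unavailability = 10 ** -nines        # e.g. six nines -> 1e-6
    downtime_min = unavailability * MINUTES_PER_YEAR
    print(f"{nines} nines -> {downtime_min:.4f} min/year")
```

Six nines works out to roughly half a minute of downtime per year, which is why "a couple of issues in quick succession" is notable at that level.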
All together
The flight was saved.
6-7 nines is a lot of nines and we’ve had a couple of issues in quick succession now
I don't think twice about sleeping on a flight because I've already made my bed at that point - nothing I can do if something goes wrong.
(Well, I've woken my wife when a doctor was called for before, but that's about the extent of my usefulness.)
Besides, a plane crash is far from the worst way to go. Dramatic, sure.
But dementia? Cancer? That’s often a pretty miserable death.
Plenty of things out there to get worried about if you want.
Maybe they used to, but Boeing has been doing rather worse and that’s the point here isn’t it?
We’re just going to see more and more issues like this as more and more software is used in applications like this. I would be willing to bet that a Tesla would also spontaneously crash if left on for hundreds of hours, but they just rarely if ever are left on that long.
As a more mundane example: the wifi on planes does temporary [edit: DHCP, not NAT] leases. But the system on many has expiration windows on the order of hours, possibly more than a day... Couple that with the number of passengers planes serve and busy routes can easily exhaust the lease pool.
The solution: there's a button the flight attendants can push to reboot the router, dumping the lease table.
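A rough sketch of how the pool runs dry (every number here is an assumption for illustration: a /24 pool, 24-hour leases, and a short-haul plane flying several full segments a day):

```python
# Illustrative DHCP lease-exhaustion arithmetic. All figures are assumed:
# a /24 address pool, 24-hour leases, a full narrowbody on a busy rotation.
POOL = 2**8 - 3          # /24 minus network, broadcast, and gateway addresses
LEASE_HOURS = 24         # assumed lease expiry window

passengers_per_segment = 180
segments_per_day = 5
devices_leased_per_day = passengers_per_segment * segments_per_day

print(devices_leased_per_day, "leases/day vs a pool of", POOL)
print("pool exhausted" if devices_leased_per_day > POOL else "pool fine")
```

With leases outliving each turnaround, every new planeload claims fresh addresses until the pool is empty, hence the reboot button that dumps the lease table.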
But I guess we’re talking about the same people who made the mistake in the first place…
Users VPNing into the reused address space for their own home VPN are probably knowledgeable enough to figure out what is going on and a small enough user base to not care about.
Edit: I feel bad for saying IPv4 sucks. It's one of my favourite pieces of tech and an astonishingly good one at that. It just doesn't have a big enough address space.
Now, I think VPN applications are smarter than that and will still get the outgoing packet to the default gateway (citation and research needed), but what happens when it doesn't know to handle a route automagically? For instance, with DHCP, a router can tell your computer which DNS server to use. If that's on the local network, then all your DNS requests actually route into the network on the other side, where you almost certainly aren't going to be talking to a DNS server. And now you can't go to any websites.
Hopefully this helps. I'm not the most knowledgeable about VPNs and routing, but I'm pretty sure this is all fairly accurate.
If the plane network uses 10.0.0.0/8; and then the VPN you're trying to connect to uses 10.0.0.0/8, stuff breaks.
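A minimal illustration using Python's stdlib ipaddress module (the home VPN subnet here is a made-up example):

```python
# Demonstrating the 10.0.0.0/8 collision described above: if the plane's LAN
# and your VPN's address space overlap, routes for one shadow the other.
import ipaddress

plane_lan = ipaddress.ip_network("10.0.0.0/8")
home_vpn = ipaddress.ip_network("10.8.0.0/24")   # hypothetical home subnet

print(plane_lan.overlaps(home_vpn))                    # the routes conflict
print(ipaddress.ip_address("10.8.0.5") in plane_lan)   # same host matches both
```

Any packet for a 10.8.0.x host is ambiguous: the OS routing table can send it to the plane's gateway instead of through the tunnel, so stuff breaks.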
Though I suppose it's not worth it when you can hit 'reboot'.
100 bucks says IPv6 would still not get implemented. We need legislation at this point. There's enough stubborn assholes in the networking infrastructure industry just refusing to do their job for it to happen by itself. They will insist they need to save a few thousand bucks and hold the whole damn world back.
Their job is to make traffic work on the chunk of the Internet they administer. If they can do that with IPv4, they're doing the job.
If there were things unreachable by protocols other than IPv6 that people needed, that could force the issue, but there aren't.
If this issue was in a car, we would never know as no one keeps their car running for 50 days straight.
This particular problem has been known for years (the article is from 2020).
And just to get ahead of: “Well what about the 737 MAX”, that was a system specification error, not due to “buggy” software failing to conform to its specification. The software did what it was supposed to do, but it should not have been designed to do that given the characteristics of the plane and the safety process around its usage.
Exactly: the system was designed to fly the plane into the ground if a single sensor was iced up, and that's exactly what the software did. Boeing really thought this system specification was a good idea.
They modified the flight characteristics of the system. They tuned the control scheme to provide the "same" outputs as the old system. However, the tuning relied on a sensor that was not previously safety-critical. As the sensor was not previously safety-critical, it was not subject to safety-critical requirements like having at least two redundant copies as would normally be required. They failed to identify that the sensor became safety critical and should thus be subject to such requirements. They sold configurations with redundant copies, which were purchased by most high-end airlines, but they failed to make it mandatory due to their oversight and purchasers decided to cheap out on sensors since they were characterized as non-safety-critical even if they were useful and valuable. The manual, which pilots actually read, has instructions on how to disable the automatic tuning and enable redundant control systems and such procedures were correctly deployed at least once if not multiple times to avert crashes in premier airlines. Only a combination of all of those failures simultaneously caused fatalities to occur at a rate nearly comparable to driving the same distance, how horrifying!
An error in UX tuning, dependent on a sensor that was not made properly redundant, was the "cause". That is not a "stupid mistake". It is a really hard mistake, and downplaying it as a stupid mistake underestimates the challenges involved in designing these systems. That does not excuse their mistake, as they used to do better, much better, like 1,000x better, and we know how to do better, and the better way is empirically economical. But it does the entire debacle a disservice to claim it was just "being stupid". It was not; it was only qualifying for the Olympics when they needed to win the gold medal.
Whistleblower testimony indicated it wasn't a failure to identify it as safety-critical, but a conscious decision not to describe it as such to the regulator, and not to implement it as a dual-sensor system, since doing so would have caused the design to require Class D simulator training, whose absence Boeing was relying on as a selling point to prevent existing airlines from defecting to Airbus.
>They sold configurations with redundant copies, which were purchased by most high-end airlines, but they failed to make it mandatory due to their oversight and purchasers decided to cheap out on sensors since they were characterized as non-safety-critical even if they were useful and valuable.
Incorrect. All MAXes have two AoA vanes, each paired to a single flight computer. The plane has two flight computers, one on each side of the cockpit, and the computer in command is typically alternated between flights. One computer per flight is considered in-command (henceforth "Main"); the other operates as "auxiliary". The configuration you're thinking of is an AoA disagree light, implemented by enabling a codepath in software running on the Main FC whereby a cross-check against the value from the AoA vane networked to the auxiliary FC would light a warning lamp, informing pilots that system automation would be impacted because the AoA values between the Main FC and auxiliary FC differed. A pilot would be expected to recognize this and adapt behavior accordingly/take measures to troubleshoot their instruments. Importantly, however, this feature had zero influence on MCAS. MCAS only took into account inputs from the vane directly wired to the Main FC. While a cross-check happened elsewhere for the sole purpose of illuminating a diagnostic lamp, there was no cross-check functionality implemented within the scope of the MCAS subsystem. The MCAS system was not thoroughly documented in any documentation delivered to pilots; the program test pilot got specific dispensation to leave it out of the flight manual. See the Congressional investigation, the final NTSB report, and the FAA report.
>The manual, which pilots actually read, has instructions on how to disable the automatic tuning and enable redundant control systems and such procedures were correctly deployed at least once if not multiple times to avert crashes in premier airlines.
The documentation, which included an Airworthiness Directive and NOTAM, informed pilots that any malfunction should be treated in the same manner as a stabilizer trim runaway. That problem is characterized in aviation parlance as continual uncommanded actuation of the trim motors. MCAS, notably, is not that. It is periodic, and in point of fact it ramps up in intensity over time until over 2° of travel are commanded by the computer per actuation event, with the timer between actuations being reset to 5 seconds by use of the on-yoke stab trim switches. This was not communicated to pilots. Furthermore, there were design changes to the stab trim cutout switches between the 737NG (the MAX's predecessor) and the MAX. In the NG, the stab trim cutout could isolate the FC alone, or both FC and yoke switches, from the stab trim motor. In the MAX, however, the switches were changed to never isolate the FC from the stab trim motors, because MCAS being operational was required to check the box on FAR compliance for occupant-carrying aircraft. So when that cutout was used, all electrically assisted actuation of the horizontal stabilizer became unavailable. The manual trim wheel would be the only trim input, and in out-of-trim attitudes the loading on the control surface would be so excessive that physical actuation without electronic assistance was not feasible on the timescales required to recover the plane. There was a maneuver known to assist in these conditions (when they occurred at high altitude) called "roller coastering", in which you dive further in the undesired direction to unload the control surface and render it actuable. This technique has not appeared in official documentation since the pre-NG "Dino" 737.
The events you're referring to, where uncommanded actuations were recovered on other flights, happened at high altitude and were recovered by countering with the electrical stab trim switches, followed by stab trim cutout within the reset 5-second watchdog window before MCAS could reactivate after a yoke trim switch actuation. This procedure, and the implementation details needed to fully understand its significance, were undocumented prior to the two crashes. Furthermore, cutting MCAS/the Main FC out of the stab trim circuit and finishing the flight in a completely manually trimmed configuration technically meant flying an aircraft in a configuration that could not be certified to carry passengers, taking the FARs prescriptively and rules-as-written with zero slack offered for convenience, because MCAS was necessary for grandfathering the MAX under the old type cert. Without MCAS functional it's technically a new beast, one that is non-compliant with control-stick force feedback curves when approaching stalls, which, just to make it clear, is a characteristic every civil transport in every jurisdiction worldwide has complied with for well over 50 years. This was not documented and only became apparent after investigation. Again, see the House findings, the FAA report, and the NTSB.
>Only a combination of all of those failures simultaneously caused fatalities to occur at a rate nearly comparable to driving the same distance, how horrifying!
Oh, the multi-billion-dollar aircraft maker built a machine that crashes itself, gaslit its regulators, pilots, airlines, and the flying public to juice the stock price so executives could meet their quarterly incentives, and diverted funds away from its QA and R&D functions to do stock buybacks, move HQ away from the factory floor, and try to union-bust. With over 300 direct, measurable deaths within a couple of months, multiple years of grounding and mandated redesigns to fix all the other cut corners we've been unearthing, and veritable billions of dollars lost to delays. Heavens, it could happen to anybody. How could you possibly see this as something to get upset about? /s
As you can see from my final statement, I made no argument that it was not a travesty. It was ABSOLUTELY UNACCEPTABLE. This is not a defense of their inadequacy.
I was pointing out how it is absolutely incorrect to claim that it was a "stupid mistake". That argument is used by people implicitly arguing that "if only Boeing used modern software development practices like Microsoft/Google/Crowdstrike/[insert big software company here], then they would never have introduced such problems". That is asinine. As can be seen from your explanation, the problem is multi-faceted, requiring numerous design failures across implementation, integration, and incentives. In fact, the problems are even more subtle and pernicious than in my original explanation, which was derived from high-level summaries rather than the investigation reports themselves.
I do not know if this has changed in the last few years, but at Microsoft you were required to have one whole randomly selected person, with no required domain expertise, say they gave your code, in isolation, a spot check before it could be added. The same process applied regardless of code criticality, as they do not even have a process to classify code by criticality. This is viewed as an extraordinary level of process and quality control that most could only dream of achieving. Truly, if only Boeing threw out whatever they were doing and adopted such heavyweight process from "best-in-class" software development houses, they would have discovered and fixed the 737 MAX problems.
Boeing does not need to adopt modern software development "best practices" and whatever crap they use at Microsoft/[insert big software company here] that introduces bugs faster than ant queens. The processes in play that created the 737 MAX already make Microsoft and its peers look like children eating glue, but they are inadequate for the job of making safe aerospace software and systems. What Boeing needs to do is re-adopt their old practices that make the 737 MAX development processes look like a child eating glue. The 737 MAX was not stupid, it was inadequate. BOTH ARE UNACCEPTABLE, but the fix is different.
Second, 787s have been flying for ~13 years and ~4.5 million flights [1]. Assuming they were unaware of the problem for the majority of that time, their unknowing maintenance and usage processes avoided critical failures due to the stated problems across a tremendous number of flights. Given they now know about it and are issuing a directive to enhance their processes to explicitly handle the problem, we can assume it is even less likely to occur than before, which was already experimentally determined to be ludicrously unlikely. Suing someone into oblivion for an error that has never manifested as a serious failure, and that is exceedingly unlikely to manifest, is a little excessive.
Third, they should be remediating problems as they arise, balanced against the risks introduced by specification changes and against the alternative of other process modifications. Given Boeing's other recent failings, they should face strict scrutiny that they are faithfully following the traditional, highly effective remediation processes. It should only be worrisome if they are seeing disproportionately more problems than would be expected in an aircraft design of its age and are not remediating them robustly and promptly.
I appreciate your point of view. The air travel industry is undeniably safe, more so than any transportation system ever, by a large margin. On the other hand, it is possible to make software systems that do not have the defects described in the article. So how do we get to the place where we choose to build systems that behave correctly? I don't think we get there without severe penalties for failure.
I disagree: the Japanese shinkansen bullet train system has never had a fatal accident, except for a single incident 30 years ago when someone was caught in a door and dragged 100 meters. No fatalities from collisions, derailings, etc., ever, since the 1960s. That's far safer than air travel could ever claim to be.
Even other train systems have better records than commercial aviation, in general. Plane crashes are rare these days, but they still happen once in a while, and the results are usually catastrophic.
Are planes safer than cars? Well of course, but that's a really, really low bar: cars are driven by all kinds of morons who frequently (esp. in the US) have little to no training or testing, are frequently distracted, don't have a copilot who can take over at any time, and are frequently operating in a very, very chaotic environment (like city streets). It's truly a wonder there aren't more fatal crashes. But safer than trains in general? I seriously doubt it.
US commercial aviation averages ~1 trillion passenger-miles per year [2]. So if we compare the last 4 years of US aviation that is a comparable number of passenger-miles.
Over the last 4 years recorded on this dataset (2019-2022)[3] it looks like there were 5 fatalities total. Over the last 4 years recorded on this dataset (2018-2021)[4] it looks like there were 2 fatalities total.
So, while it does not appear to be safer, it is within a few factors on a passenger-mile basis. Furthermore, there are multiple periods of 4 trillion consecutive passenger-miles with 0 recorded accidents. It is nowhere near obvious that it is "far safer than air travel could ever claim to be", and it is certainly a much closer race than you believed, given your other assertions.
[1] https://www.statista.com/statistics/1262752/japan-jr-high-sp...
[2] https://www.transtats.bts.gov/traffic/
[3] https://www.bts.gov/content/us-air-carrier-safety-data
[4] https://www.airlines.org/dataset/safety-record-of-u-s-air-ca...
Second of all, even if we do use your metric which only cares about passenger-trips per event it still does not matter. The Shinkansen has transported ~6.4 billion people since inception. As seen in the second link I provided above, US commercial aviation serves ~900 million passengers per year. So, that is 7 years of US commercial aviation to transport the same number of people the Shinkansen has ever transported. As seen on the third link the last 7 years (2016-2022) had ~6 fatalities and as seen on the fourth link the last 7 years (2015-2021) had 2 fatalities compared to the 1 fatality on the Shinkansen.
Third of all, given that the Shinkansen has transported ~6.4 billion people, but averages 150 million people per year and ~60 billion passenger-miles per year, we can reasonably conclude that I overestimated at ~3.6 trillion passenger-miles and it would likely actually be ~2.4 trillion passenger miles or just 2.5 years of US aviation. From the third link that would be a mere 1 fatality and from the fourth link 0-1 fatalities.
If we extend our analysis to the last decade the third link indicates 15 fatalities over ~10 trillion passenger miles, ~2x the Shinkansen rate, and the fourth link indicates 2 fatalities over ~10 trillion passenger miles, ~50% the Shinkansen rate. Again, broadly comparable, but it is hard to truly tell which one is "safer" than the other. And again, they are clearly in the same ballpark and not dramatically different as you implied.
https://nationalelevatorindustry.org/elevators-escalators-ar...
US airlines haven't had a single fatal crash in 15 years.
https://nypost.com/2019/08/22/video-shows-moment-man-crushed...
What failure? The planes work. This is puritanism.
The whistleblowers dying is coincidental and convenient.
https://www.theguardian.com/business/article/2024/may/02/sec...
2. I'm not sure how a few whistleblowers dying disproves "responsibility does not get easily sloughed off". If anything, they're getting more responsibility than is warranted. Every time there's something wrong with a Boeing product, people almost reflexively start posting about how it must be caused by corner cutting at Boeing, or how it's yet more evidence that Boeing is circling the drain. This happens even for planes that are decades old and have a solid service history, in incidents that by all accounts were probably caused by pilot error or improper maintenance.
Point being: when you're talking about high-QA systems, where quality is non-negotiable (you will have everything documented and tested), then barring exec/managerial malfeasance preventing that work from being done, you reach for the same simple things over and over again, since it takes a hell of a lot of work to actually characterize and certify a thing to the requisite level of reliability and operating conditions.
Testing ain't free, ya know.
That’s a reboot.
And then it turns itself off if it's not used for a while. I hate printers.
You're telling me aerospace's "real engineering-level" is worse than something a sophomore can cook up?
The main thing that gets checked is the worst-case timing analysis for every branch condition. And there are stack monitors to monitor if the stack is growing in size.
Look at Rapita Systems' website for more info... we don't use them, but they explain it well.
Guess how I typically reboot things :)
BTW I have an AGM ("advanced glass mat") battery in my 1995 Toyota which has a completely analog charging system, and it doesn't get cooked, so it's not because there's something special about the battery.
https://www.heise.de/en/news/BMW-Huge-recall-and-profit-warning-due-to-defective-Conti-brakes-9864793.html
Indeed when you get close to exhausting the main battery rack it starts selectively shutting down everything. I’ve never personally let mine get to 0% ever - but for instance a Tesla is continuously on, and if you use sentry mode it’s not just on but the GPU is constantly doing classification of the environment to determine if someone is prowling your vehicle.
Why do you say this?
I assume that after this the software was soak-tested for weeks / months to eliminate that class of bug. Naval Reactors is many things, but repeating the same mistake twice isn’t one of them.