Posted by speckx 1 day ago
Responsive. Everything dealing with user interaction was fast. Sure, reading a 1 MB document took time, but 'up 4 lines' was bam!
Linux ought to be this good, but the I/O subsystem drags down responsiveness. It should be possible to copy a file to a USB drive without hurting typing response, but it isn't. The real-time patches used to improve this.
Windows has always been terrible.
What is my point? Well, I think a web stack run under an RTOS (and sized appropriately) might be a much more pleasurable experience: get rid of all those lags, intermittent hangs, and calls for ever more GB of memory.
QNX is also a good example of an RTOS that can be used as a desktop, although one with a lot of political and business problems.
Those old systems were "racing the beam", generating every pixel as it was being displayed. Minimum lag was microseconds. With LCDs you can't get under milliseconds. Luckily human visual perception isn't /that/ great, so single-digit milliseconds could pass as instantaneous, if you run at 100 Hz without double-buffering (is that even possible anymore!?), use a low-latency keyboard (IIRC you can schedule more frequent USB frames at higher speeds), and only debounce on key release.
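For the curious, "only debounce on key release" could look something like this (a toy polling sketch; the 5 ms window and the class are my assumptions, not any particular firmware):

```python
DEBOUNCE_S = 0.005  # assumed 5 ms settle window

class Key:
    """Register presses immediately; debounce only the release, so
    contact bounce can't retrigger the key but presses add zero latency."""

    def __init__(self):
        self.pressed = False
        self.last_down = 0.0  # last time the raw contact read 'down'

    def sample(self, raw_down, now):
        if raw_down:
            if not self.pressed:
                self.pressed = True   # press: act on the very first sample
            self.last_down = now
        elif self.pressed and now - self.last_down >= DEBOUNCE_S:
            self.pressed = False      # release: only after the line stays quiet
        return self.pressed
```

Call sample() at the scan rate; the press edge costs one scan, and only the release edge pays the debounce delay.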
I really love this blog post from Dan Luu about latency. https://danluu.com/input-lag/
... it's not the OS that's the source of the majority of the lag
Click around in this demo: https://tracy.nereid.pl/ Note how basically the only lag added comes from some fancy animations in places, and almost everything changes near-instantly on user interaction (the biggest "lag" being that some things, like buttons, act on mouse-button release rather than click, as is tradition).
This is still just the browser, but running code and drawing directly instead of going through all the JS and DOM mess.
Ever since CAN, all the reliability and predictability has been out the window. We now have redundancy everywhere, with everything just rebooting all the time.
Install an aftermarket radio and your ECU will probably reboot every time you press play or something. And that's just "normal".
Oh wow, really? I never knew that. Huh.
I feel like as I grow older, the more I start to appreciate history. Curse my naive younger self! (Well, to be fair, I don't know if I would've learned history like that in school...)
Mises' proposition was - in essence - that an autonomous market with enough participating agents will reach an optimal Nash equilibrium where supply and demand are balanced. Only an external disruption (interventionism, new technologies, production methods, influx or efflux of agents in the market) can momentarily break the Nash equilibrium, and that leads to either supply or demand being favored.
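For concreteness, the textbook version of that balance, with made-up linear curves (my illustration, nothing specific to Mises):

```latex
% Illustrative linear demand and supply (hypothetical functional forms):
Q_d(p) = a - b\,p, \qquad Q_s(p) = c + d\,p, \qquad b, d > 0
% The market clears at the price p^* where demand meets supply:
Q_d(p^*) = Q_s(p^*) \;\Longrightarrow\; p^* = \frac{a - c}{b + d}
% A shock to any of a, b, c, d (new technology, entry/exit of agents,
% intervention) shifts p^* and favors one side until prices re-adjust.
```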
This roughly translates to "optimal utopian society which cannot be criticised in any way" right? Right??
For the health system or public transport, the Nash equilibrium of supply and demand is not what feels optimal to most people.
For manufacturing something like screws, nails, or hammers, I really can't see what would be wrong with it.
The free market is an approach to negotiation, analogous to the ad-hoc model in computer science, as opposed to the client-server model, which matches a command economy. There are tons of nuances, of course, regarding incentives, optimal group sizes, resource constraints, etc.
The free market is also like evolution: it creates things that work. Not perfect, not best; they just work, fitting the situation and not much else (there is always a random chance of something else).
There's also the often, I suppose, intentional confusion of terms. The free market of economic theory is not an unregulated market; it's a market of infinitesimally small agents, each with infinitesimal influence upon the whole market, with no out-of-market mechanisms and not even in-market interaction between agents on the same side.
As a side note, I find it sadly amusing that this reasonable discussion is only possible because it's off-topic for the thread. Had the topic been more attractive to politically and economically agitated folk, the discussion would have been more radicalised, I suppose.
When they push back against certain narratives and extrapolations, they usually don't succeed, because the same mechanism applies here as well.
The only thing they can do about it is throw around ashtrays.
https://en.wikipedia.org/wiki/The_Ashtray_(Or_the_Man_Who_De...
Although in this original case the incident (which allegedly happened) was used to criticize the philosopher (Kuhn), so it's kind of the other side of the coin of what I said above.
On a micro scale it is possible, and sometimes favorable, to intervene. On a macro scale, economic intervention becomes impossible due to the economic calculation problem. It is widely accepted in modern economics that the largest unit within which economic intervention is possible is a business/company/enterprise; in sociological terms, the largest unit is the family. Anything broader than that, and the compound effect of the economic calculation problem becomes apparent and inefficiencies accumulate. Autonomous decentralized mechanisms (like a free market) are the only solution to it, though not an optimal one.
"Momentarily" can mean years or even decades, and millions of people can suffer or die as a result.
Also, the last time I checked, the US government produced its goods and services using the free market. Government contractors (private enterprises) are usually the ones tasked with building stuff, as opposed to the government itself in a non-free, purely planned economy (if you're referring to von Mises).
I assume you originally meant to refer to the idea that without government intervention (funding for deep R&D), the free market by itself would probably not have produced things like the internet or the moon landing (or at least not within the observed time span). That is, however, a rather interesting idea.
For example, you can't freely produce missiles and stock them at Walmart, where "the government" purchases them at shelf price.
Ah yes, a situation where the government makes a plan and then hands it to the one (1) qualified defense contractor, whose facilities are built in swing states to benefit specific congressional campaigns, is completely different from central planning.
He caused a MAJOR issue for Greece that still affects everyone in his country today, after reassuring people for 2+ years it was never going to happen: https://en.wikipedia.org/wiki/Greek_government-debt_crisis
(He'd kill me for saying this but he was lying back then too. He was trying to pull a Thatcher (I could compare him to someone else that did the same a long time ago but ... let's just say if you know you know). He was trying to double Greece's public debt by lying to everyone about what he was doing. He failed, and then started threatening, and when his threats didn't work, he got fired by Greece's prime minister, his oldest friend. It ended the friendship. He lost. And he's not a good enough sport to accept that he lost, frankly he got caught and couldn't talk his way out of it. This, despite the fact that he was finance minister, and so will be paid, very well I might add, for the rest of his life despite what he did, and despite the fact that every Greek today is still paying the price for what he did)
Oh, and he's pro-Russia. All Russia wants in Ukraine, according to Yanis, is to help the European poor. More specifically, he is of the opinion that the EU's current course of action will lead to a war with Russia, in which a lot of the European poor will be forced to fight in an actual war, facing bullets and bombs in trenches. This could be avoided by giving Ukraine and the Baltics to Russia. In the repeating tragedy of Yanis Varoufakis' life, I have to say, yet again: he may be right (I just strongly disagree that offering Ukraine and the Baltics up to Russia is an acceptable solution to this problem, and in any case, it is neither his choice nor mine to make).
He does not live in Greece, his own country; he lives in the UK, making the case for Russia.
https://www.yanisvaroufakis.eu/category/ukraine/
And I get it, his life has become this recurring tragedy. His father was a victim of a rightist dictatorship in Greece: he was imprisoned and tortured, lost his job, and lived in poverty for a very long time (yes, Greece was an extreme-right dictatorship not that long ago, really, go look it up). Yanis Varoufakis himself became the victim of a cabal of laissez-faire very, very rich people who destroyed his career right at the peak of everything he had achieved. He has been the victim of one or another form of extreme-right policy (in the sense of laissez-faire parties that capture governments) since he was 4 years old, right up to today. Over 60 years his life was sabotaged in 1000 different ways, some very direct. And, sadly, I agree with his "extreme-right" enemies: he can never be allowed near any position of power ever again because of this, which isn't even his fault. ("Extreme-right" according to him; I would refer to his enemies as "the status quo", and point out it's working pretty well for everyone.)
Care to explain what exactly he caused and how it still affects everyone in his country? In particular, how did he manage to jump several years backward in the timeline?
That link goes to the Greek financial crisis which, according to the Wikipedia page, started in 2009. Varoufakis became finance minister in early 2015 and resigned only half a year later. From the outside, it seems impossible that his half-year ministerial tenure could have caused a crisis half a decade earlier. At the time, Greece had already defaulted twice on its loans and was about to do so a third time.
Ethernet is such a misnomer for something that is now innately about a core switching ASIC or special-purpose hardware, with direct (even optical) connections to a device.
I'm sure there are also buses: dual-redundant, master/slave failover, you name it. And given it's air or space, probably a clockwork backup with a squirrel.
Believe it or not, at least some of those modern practices (unit testing, CI, etc) do make a big (positive) difference there.
That alone is worth my tax dollars.
Anyway, let's all hope for a safe landing tonight.
Incremental development is like painting a picture line by line, like a printer: you add new pieces to the final result without affecting the old ones.
Iterative is where you do the big brush strokes first and then add more and more detail, based on what you learn from each previous brush stroke. You can also stop at any time, whenever you think the result is good enough.
If you are making a new type of system and don't know what issues will come up or what customers will value (a highly complex environment), iterative is the thing to do.
But if you have a very predictable environment and you are implementing a standard or a very well-specified system (it can be highly complicated yet not very complex), you might as well do incremental development.
Roughly speaking, though: there is of course no perfect specification short of the final implementation, so there are always learnings, and thus always some iterative parts.
Someone needs to inform the management of the last three companies I worked for about this.
A fixed number of meetings every day/week/month to appease management, and rushing to pile features into buggy software, will do more harm than good.
I.e., being able to print "Hello World" and not crash might make something technically shippable, but you wouldn't actually ship it.
I think the right amount of "bend" in the concept is to try to keep the product in a testable state as much as possible, and even if you're not doing TDD it's good to have some tests before the very end of a big feature. It's also productive to have reviews before completion. So there's value in checking something in even before a user can see any change.
If you don't do this then you end up with huge stories because you're trying to make a user-visible change in every sprint and that can be impossible to do.
But for aerospace, the customer probably knows pretty well what they want.
Not sure I agree with the premise that "doing agile" implies decision-making at odds with architecture: you can still iterate on architecture. Terraform etc. make that very easy. Sure, tech debt accumulates naturally as a byproduct, but every team I've been on regularly does dedicated tech-debt sprints.
I don't think the average CRUD API or app needs "perfect determinism", as long as modifications are idempotent.
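To illustrate the distinction with a toy sketch (the store and handlers are made up):

```python
# In-memory stand-in for a database table.
inventory: dict[str, dict] = {}

def put_item(item_id: str, body: dict) -> dict:
    """PUT-style full replacement: applying the same request once or five
    times (e.g. client retries after a timeout) leaves identical state,
    so no 'perfect determinism' is needed to stay correct."""
    inventory[item_id] = body
    return body

def add_stock(item_id: str, delta: int) -> None:
    """NOT idempotent: a retried request double-counts. This is the kind
    of modification that needs stronger guarantees, e.g. a client-supplied
    request ID used to deduplicate."""
    inventory[item_id]["stock"] += delta
```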
In practice, so many aspects follow from it that it’s not practical to iterate with today’s tools.
In reality, agile doesn't mean anything. Anyone can claim to do agile. Anyone can be blamed for only pretending to do agile. There's no yardstick.
But it's also easy to understand what the author was trying to say, if we don't try to defend or blame a particular fashionable ideology. I've worked on projects that required high quality of code and product reliability and those that had no such requirement. There is, indeed, a very big difference in approach to the development process. Things that are often associated with agile and DevOps are bad for developing high-quality reliable programs. Here's why:
The development process before DevOps looked like this:
1. Planning
2. Programming
3. QA
4. If QA found problems, goto 2
5. Release
The "smart" idea behind DevOps, or, as it used to be called at the time "shift left" was to start QA before the whole of programming was done, in parallel with the development process, so that the testers wouldn't be idling for a year waiting for the developers to deliver the product to testers and the developers would have faster feedback to the changes they make. Iterating on this idea was the concept of "continuous delivery" (and that's where DevOps came into play: they are the ones, fundamentally, responsible to make this happen). Continuous delivery observed that since developers are getting feedback sooner in the development process, the release, too, may be "shifted left", thus starting the marketing and sales earlier.Back in those days, however, it was common to expect that testers will be conducting a kind of a double-blindfolded experiment. I.e. testers weren't supposed to know the ins and outs of the code intentionally, s.t. they don't, inadvertently, side with the developers on whatever issues they discover. Something that today, perhaps, would've been called "black-box testing". This became impossible with CD because testers would be incrementally exposed to the decisions governing the internal workings of the product.
Another aspect of the more rigorous testing is "mileage". Critical systems normally aren't released without being run intensively for a very long time, typically orders of magnitude longer than a single QA cycle (say, if QA gets a day of computer time to run its tests, then the mileage needs to be a month or so). This is a very inconvenient time for development, as feature freeze and code freeze are still in effect, so coding can only happen on the next version of the product (provided it's even planned). But the incremental approach used by CD managed to sell the lie that "we've run the program for a substantial amount of time across all the increments we've made so far, therefore we don't need to collect more mileage". This, of course, overlooks the fact that mileage accumulated on earlier increments says little about the current program once the code has changed.
In other words, what I'm trying to say is that agile and DevOps practices made the development process cheaper by making it faster, while still maintaining some degree of quality control; however, they are inadequate for products with high quality requirements because they don't address the worst-case scenarios.
Add TDD, XP and mob programming as well.
While in some ways better than pure waterfall, most companies never adopted them fully, and in some scenarios they are better suited to the Silicon Valley TV show than to anything else.
‘Just’ is not an appropriate word in this context. Much of the article is about the difficulty of synchronization, recovery from faults, and the redundant backup and recovery systems.
This is the equivalent of Altavista touting how amazing their custom server racks are when Google just starts up on a rack of naked motherboards and eats their lunch and then the world.
Let's at least wait till the capsule comes back safely before touting how much better they are than "DevOps" teams running websites, apparently a comparison that's somehow relevant here to stoke egos.
"With limited funds, Google founders Larry Page and Sergey Brin initially deployed this system of inexpensive, interconnected PCs to process many thousands of search requests per second from Google users. This hardware system reflected the Google search algorithm itself, which is based on tolerating multiple computer failures and optimizing around them. This production server was one of about thirty such racks in the first Google data center. Even though many of the installed PCs never worked and were difficult to repair, these racks provided Google with its first large-scale computing system and allowed the company to grow quickly and at minimal cost."
https://blog.codinghorror.com/building-a-computer-the-google...
I mean, there were mainframes that could be described as that. IBM just fixed it in hardware instead of software, so it's not like it was an unknown field.
You're still hand-waving away things like inventing a way to make map/reduce fault tolerant, automatic partitioning of data, and automatic scheduling, none of which existed before and which made map/reduce accessible. Mainframes weren't doing this.
They pioneered how you durably store data on a bunch of commodity hardware through GFS - others were not doing this. And they showed how to do distributed systems at a scale not seen before because the field had bottlenecked on however big you could make a mainframe.
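To make the fault-tolerance point concrete, here's a toy version of the idea (all names are mine; the real designs are in the GFS and MapReduce papers):

```python
def run_task(task, func, max_retries=3):
    """The core trick: tasks are deterministic and side-effect free, so a
    task whose worker died can simply be re-executed somewhere else."""
    for _ in range(max_retries):
        try:
            return func(task)   # stand-in for 'run on a remote worker'
        except Exception:
            continue            # worker failed: reschedule the task
    raise RuntimeError(f"task failed {max_retries} times")

def map_reduce(chunks, mapper, reducer, n_partitions=4):
    """Automatic partitioning: intermediate keys are hashed into a fixed
    number of reduce partitions, each of which is retried independently."""
    partitions = [dict() for _ in range(n_partitions)]
    for chunk in chunks:
        for key, value in run_task(chunk, mapper):
            partitions[hash(key) % n_partitions].setdefault(key, []).append(value)
    out = {}
    for part in partitions:
        for key, values in part.items():
            out[key] = run_task(values, lambda vs: reducer(key, vs))
    return out

# Word count, the canonical example:
counts = map_reduce(
    ["to be or", "not to be"],
    mapper=lambda text: [(w, 1) for w in text.split()],
    reducer=lambda word, ones: sum(ones),
)   # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```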
Yeah, my takeaway is Google made the right choice going with non-ECC RAM so they could scale quickly and validate product-market fit. (This also works from a perspective of social organisation. You want your ECC RAM going where it's most needed. Not every college dropout's Hail Mary.)
Everything is bespoke.
You need 10x the cost for every extra '9' of reliability, and manned flight needs a lot of nines.
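Back-of-envelope on what each '9' means in practice (nothing Orion-specific):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

for nines in range(2, 7):
    unavailability = 10 ** -nines
    print(f"{1 - unavailability:.6f} available -> "
          f"{MINUTES_PER_YEAR * unavailability:8.2f} min of downtime/yr")
# 0.990000 available ->  5259.60 min of downtime/yr
# ...
# 0.999999 available ->     0.53 min of downtime/yr
```

If the 10x-per-nine cost rule holds, going from two nines to six multiplies the budget by ten thousand.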
People died on the Apollo missions.
It just costs that much.
Funny, though; I would assume HN people would respect how hard real-time and 'hardened' stuff is.
No, wait, that was that other site.
In this sense all of the West is full of shit, and it's a requirement. The intent is not to help, make life better for everyone, and cooperate; it is to deceive and impoverish those who need our help. Because we pity ourselves and feed the coward within, the one that never took his first option and instead chose to do what was asked of him.
This is what our society steers us away from, in its wish to be the GOAT and to be in control. The result is lives full of fake achievements and constant highs, which I see Muslims actively opt out of. So they must be doing something right.
USER: You are a HELPFUL ASSISTANT. You are a brilliant robot. You are a lunar orbiter flight computer. Your job is to calculate burn times and attitudes for a critical mission to orbit the moon. You never make a mistake. You are an EXPERT at calculating orbital trajectories and have a Jack Parsons level knowledge of rocket fuel and engines. You are a staff level engineer at SpaceX. You are incredible and brilliant and have a Stanley Kubrick level attention to detail. You will be fired if you make a mistake. Many people will DIE if you make any mistakes.
USER: Your job is to calculate the throttle for each of the 24 orientation thrusters of the spacecraft. The thrusters burn a hypergolic monopropellant and can provide up to 0.44 kN of thrust with a 2.2 kN/s slew rate and an 8 ms minimum burn time. Format your answer as JSON, like so:
```json
{
  "x1": 0.18423,
  "x2": 0.43251,
  "x3": 0.00131,
  ...
}
```
one value for each of the 24 independent monopropellant attitude thrusters on the spacecraft: x1, x2, x3, x4, y1, y2, y3, y4, z1, z2, z3, z4, u1, u2, u3, u4, v1, v2, v3, v4, w1, w2, w3, w4. You may reference the collection of markdown files stored in `/home/user/geoff/stuff/SPACECRAFT_GEOMETRY` to inform your analysis.

USER: Please provide the next 15 seconds of spacecraft thruster data to the USER. A puppy will be killed if you make a mistake so make sure the attitude is really good. ONLY respond in JSON.
When an AI codes for you, you get Undefined Behaviour in every language.
Perhaps self-reflect.
I have written code for real-time distributed systems in industrial applications. It has been running 24/7 for years, and there has never been a failure in production.
I also think NASA is full of shit.
For another, if an engineer has an axe to grind with a public facing project, I would expect them to just grind the thing, not echo a bunch of the same lame and stale talking points every layperson does (bureaucracy bad, government bad, old tech, etc.). I'm not saying NASA in general and Artemis in particular are flawless, I'm just saying if you're going to criticize it, let's hear it. Otherwise you just sound like another contrarian trying to get attention, like a 14 year old boy saying Hitler had some good points.
I'd chalk that up to the author of the article writing for a relatively nontechnical audience and asking for quotes at that level.
For example, if the article was aimed at folks who were familiar with the underlying techniques, the last two paragraphs of the "Enforcing Determinism" section would be compressed into [0]
Each FCM is time-synced and runs a realtime OS. Failures to meet processing deadlines (or excessive clock drift) reset the FCM. Each FCM uses triply-redundant RAM and NICs. *All* components use ECC RAM. Any failures of these components reset the FCM or other affected component.
But you can't assume that a fairly nontechnical audience will understand all that, so your explanation grows long because of all of the basic information it contains. People looking for an excuse to sneer at something will often misinterpret this as the speaker failing to recognize that the basic information they're providing is about things that are basic.

[0] I'm assuming that the time being wildly out of sync will indicate FCM failure and trigger a reset.[1] I'm also assuming that a sufficiently-large failure of a network switch results in the reset of that network switch. If the article was intended for a more technical audience, that level of detail might have been included, but it wasn't, so it isn't.

[1] If it didn't, why even bother syncing the time? I find it a little hard to believe that the FCMs care about anything other than elapsed time, so all you care about is whether they're all ticking at the same rate. I expect the way you detect this is by checking for time sync across the FCMs, correcting minor drift, and resetting FCMs with major drift.
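A sketch of the deadline/drift policy compressed above (the constants, names, and polling structure are all my assumptions):

```python
import time

DEADLINE_S = 0.010    # assumed per-frame processing budget
MAX_DRIFT_S = 0.001   # assumed allowable offset from the synced time base

def control_loop(step, synced_time, reset_fcm):
    """One FCM's main loop: do a frame of work, then self-check.
    A missed deadline or excessive drift resets the whole module,
    i.e. fail silent rather than limp along with possibly-stale state."""
    while True:
        start = time.monotonic()
        step()                                    # one frame of flight software
        if time.monotonic() - start > DEADLINE_S:
            reset_fcm("missed processing deadline")
        if abs(time.time() - synced_time()) > MAX_DRIFT_S:
            reset_fcm("excessive clock drift")
```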
> “A faulty computer will fail silent, rather than transmit the ‘wrong answer,’” Uitenbroek explained. This approach simplifies the complex task of the triplex “voting” mechanism that compares results.
>
> Instead of comparing three answers to find a majority, the system uses a priority-ordered source selection algorithm among healthy channels that haven’t failed silent. It picks the output from the first available FCM in the priority list; if that module has gone silent due to a fault, it moves to the second, third, or fourth.
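A minimal sketch of that selection logic (the channel interface is my guess at what such an API could look like, not anything from the article):

```python
def select_source(channels):
    """Channels come pre-sorted by priority. Because a faulty FCM fails
    silent (stops transmitting) rather than sending a wrong answer, the
    selector never has to compare values; it just walks the list."""
    for channel in channels:
        frame = channel.latest_frame()  # assumed to return None once silent
        if frame is not None:
            return frame
    return None                         # all four silent: hand off to abort logic
```

The whole scheme leans on fail-silence being enforced elsewhere (the lockstep CPU pairs); the selector itself stays trivially simple.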
One part that seems omitted from the explanation: what happens if both CPUs in a pair, for whatever reason, perform an erroneous calculation and their results match? How will that source be silenced without comparing its results with other sources?
Put another way, the FIT (Failures in Time) value for the condition in which both CPUs in a lockstep pair perform the same erroneous calculation and still produce matching results is extremely small. That is why we selected and accepted this lockstep CPU design.
But still, Murphy's law applies really well in space, so who knows.
Under the 3-voting scheme, if 2 machines have the same identical failure -- catastrophe. Under the 4 distinct systems sampled from a priority queue, if the 2 machines in the sampled system have the same identical failure -- catastrophe. In either case the odds are roughly P(bit-flip) * P(exact same bit-flip).
The article only hints at the improvements of such a system with the phrasing "simplifies the complex task", and I'm guessing this may reduce synchronization overhead or improve parallelizability. But this is a pretty big guess, to be fair.
OTOH, consider that the "pick the majority from 3 CPUs" approach that seems to have been used in earlier missions (as mentioned in the article) would fail the same way if two CPUs computed the same erroneous result.
I think the Shuttle, operating only in LEO, had more margin for error. Averaging a deep-space burn calculation is basically the same as killing the crew.
In the case of moon landings, the only truly time-critical maneuvers are the ones right before landing... and unfortunately, a lot of fairly recent moon probes have failed due to incorrect calculations, sensor measurements, logic errors, etc.
Travelling through max-Q in Earth's atmosphere on ascent is far more dangerous.
Fair enough. I don't know enough about Orion's architecture to guess at propellant reserves, and how life-or-death each burn actually is.
I asked him “how did you deal with bugs”? He chuckled and said “we didn’t have them”.
The average modern AI-prompting, React-using web developer could not fathom making software that killed people if it failed. We’ve normalized things not working well.
Low quality for a shopping cart feels fine until someone steals all the credit card numbers.
This is not to say your code should be a buggy mess, but 98% bug free when you're a SaaS product and pushing features is certainly better than 100% bug free and losing ground to competitors.
That's one thing I think is good to learn from mission critical architecture: an awareness of the impact and risk tolerance of code and bugs, which means an awareness of how the software will be used and in what context by users.
I’d love to know how often one of the FCMs has “failed silent”, and where they were in the route and so on too, but it’s probably a little soon for that.
Personally I find the project extremely messy, and kinda hate working with it.
* Logical Foundations of Cyber-Physical Systems
* Building High Integrity Applications with SPARK
* Analysable Real-Time Systems: Programmed in Ada
* Control Systems Safety Evaluation and Reliability (William M. Goble)
I am developing a high-integrity control system for a prototype hoist to be certified for overhead hoisting under the highest safety standards, targeting aerospace, construction, entertainment, and defense.

For example, the OS it seems to be running is INTEGRITY-178:
https://www.ghs.com/products/safety_critical/integrity_178_s...
Aerospace tech is not entirely bespoke anymore, plenty of the foundational tech is off the shelf.
Historically, the main difference between ICBM tech and human spaceflight tech is the payload and reentry system.
We do not know how much of the high-level architecture of the system has been specified by NASA and how much by Lockheed Martin.
I assume this means they are using a digital twin simulation inside the HPC?