
Posted by pjmlp 5 hours ago

Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy (gitlab.redox-os.org)
224 points | 199 comments
flanked-evergl 3 hours ago||
Spiritually Amish
menaerus 4 hours ago||
Let someone from the Redox team go read [1], [2], and [3]. If they still insist on keeping their position then ... well. The industry is being redefined as we speak, and everyone pushing back is really pushing against themselves.

[1] https://www.datadoghq.com/blog/ai/harness-first-agents/

[2] https://www.datadoghq.com/blog/ai/fully-autonomous-optimizat...

[3] https://www.datadoghq.com/blog/engineering/self-optimizing-s...

P.S. I know this will be downvoted to death but I'll leave it here anyway for folks who want to keep their eyes wide open.

duskdozer 1 hour ago||
Wouldn't expect anything else from big data collectors.
stingraycharles 4 hours ago|||
That’s such a silly take.

“Our approach is harness-first engineering: instead of reading every line of agent-generated code, invest in automated checks that can tell us with high confidence, in seconds, whether the code is correct.”

That’s literally what the whole industry has been doing for decades, and spoiler: you still need to review code! It just gives you confidence that you didn’t miss anything.

Also, without understanding the code, it’s difficult to see its failure modes, and how it should be tested accordingly.
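To make the "automated checks" point concrete, here is a minimal sketch of a harness-style check in Rust. The function names (`dedup_sorted`, `check_dedup_sorted`) and the invariants are hypothetical illustrations, not from the Datadog posts: the idea is just that the harness encodes what callers rely on, independently of who (or what) wrote the implementation.

```rust
/// Candidate implementation (could be human- or agent-written).
fn dedup_sorted(mut v: Vec<u32>) -> Vec<u32> {
    v.sort_unstable();
    v.dedup();
    v
}

/// The harness: mechanically checks the invariants the caller relies on,
/// rather than requiring a human to eyeball the implementation.
fn check_dedup_sorted(input: Vec<u32>) -> bool {
    let out = dedup_sorted(input.clone());
    // 1. Output is strictly increasing (sorted, no duplicates).
    let sorted_unique = out.windows(2).all(|w| w[0] < w[1]);
    // 2. Output contains exactly the same set of elements as the input.
    let same_elems = input.iter().all(|x| out.contains(x))
        && out.iter().all(|x| input.contains(x));
    sorted_unique && same_elems
}

fn main() {
    // A few quick probes; a real harness would generate many random inputs.
    for case in [vec![], vec![3, 1, 2, 3, 1], vec![5, 5, 5]] {
        assert!(check_dedup_sorted(case));
    }
    println!("all invariants hold");
}
```

Note that the harness itself still has to be written and understood by someone, which is exactly the review burden the comment above describes.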

menaerus 3 hours ago||
So you read a three-part series of blog posts packed with details within 3 minutes of me sharing the links, put yourself in a position of entitled opinion, and call my position a silly take? Sure thing.
stingraycharles 2 hours ago|||
Obviously not; I skimmed through the first two, and it’s not difficult to assess that it’s just fluff that sounds interesting but actually isn’t.
menaerus 2 hours ago||
Implementing a Redis and Kafka rewrite (in Rust) with a workload-aware, self-balancing, JIT-like engine deployed at Datadog scale is no fluff. You obviously have no idea what you're talking about.
grey-area 3 hours ago|||
They probably used an AI to summarise those blog posts for them and it told them with high confidence, in seconds, whether they were correct.
menaerus 1 hour ago||
Their profile comes up here on HN very often with Dunning-Kruger-style comments, so I believe it is no AI. AI would do a better analysis, for better or worse.
subjectsigma 3 hours ago||
> The industry is being redefined as we speak, and everyone pushing back is really pushing against themselves.

No, they’re pushing back against a world full of even more mass surveillance, corporate oligarchy, mass unemployment, wanton spam, and global warming. It is absolutely in your personal best interest to hate AI.

baq 5 hours ago||
While I appreciate the morality and ethics of this choice, the current trend means projects going in this direction are making themselves irrelevant (don't bother quipping about how relevant Redox is today, thanks). E.g. top security researchers are now using LLMs to find new RCEs and local privilege escalations; there's no reason why the models couldn't fix these, too - and that's only the security surface.

IOW I think this stance is ethically good, but technically irresponsible.

ptnpzwqd 4 hours ago||
Even if we assume that LLMs become good enough for this to be true (some might feel that is the case already - I disagree, but that is beside the point), there is no reason why OSS maintainers should accept such outside contributions that they would need to carefully review, as they come from an untrusted source, when they could just use the tools themselves directly. Low-effort drive-by PRs are a burden with no upside.
holyra 4 hours ago||
People can choose not to use AI. This argument only works if you think it is inevitable that they will eventually use LLMs.
lifis 5 hours ago||
Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.

What makes sense is that, of course, any LLM-generated code must be reviewed by a good programmer and must be correct and well written, and the AI usage must be precisely disclosed.

What they should ban is people posting AI-generated code without mentioning it or replying "I don't know, the AI did it like that" to questions.

ptnpzwqd 4 hours ago||
The problem is the increasing review burden - with LLMs it is possible to create superficially valid-looking (but potentially incorrect) code without much effort, which will still take a lot of effort to review. So outright rejecting code that can be identified as LLM-generated at a glance is a rough filter to remove the lowest-effort PRs.

Over time this might not be enough, though, so I suspect we will see default deny policies popping up soon enough.

duskdozer 5 hours ago|||
>Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.

Why not?

lifis 5 hours ago||
Because it takes a massive amount of developer work (perhaps more than anything else), and it's very unlikely they have the ability to attract enough human developers to do it without LLM assistance.

Not to mention that even finding good developers willing to develop without AI (a significant handicap, even more so for coding things like an OS that are well represented in LLM training) seems difficult nowadays, especially if they aren't paying them.

lpcvoid 3 hours ago|||
>Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.

Humans have been doing this for the better part of five decades now. Don't assume others rely on LLMs as much as you do.

>Not to mention that even finding good developers willing to develop without AI (a significant handicap, even more so for coding things like an OS that are well represented in LLM training) seems difficult nowadays, especially if they aren't paying them.

I highly doubt that. In fact, I'd take a significant pay cut to move to a company that doesn't use LLMs, if I were forced to use them in my current job.

holyra 4 hours ago||||
The LLM has brainwashed so many devs that they now think they are nothing without it.
vladms 3 hours ago||
That's an optimistic view. Maybe they really are 10x slower on any task without a LLM.
usrbinbash 4 hours ago||||
> Because it takes a massive amount of developer work

You know what else takes "a massive amount of developer work"?

"any LLM-generated code must be reviewed by a good programmer"

And this is the crux of the matter with using LLMs to generate code for everything but really simple greenfield projects: They don't really speed things up, because everything they produce HAS TO be verified by someone, and that someone HAS TO have the necessary skill to write such code themselves.

LLMs save time on the typing part of programming. Incidentally, that part is the least time-consuming.

lifis 4 hours ago|||
The submitter is supposed to be the good programmer; if not, then maintainers may or may not review it themselves depending on the importance of the feature.

And yes of course they need to be able to write the code themselves, but that's the easy part: any good developer could write a full production OS by themselves given access to documentation and literature and an enormous amount of time. The problem is the time.

duskdozer 3 hours ago|||
Well, assuming you care about verification, of course. If it's got that green checkmark emoji, it ships!
usrbinbash 5 hours ago|||
> Not sure how they can expect to make a viable full OS without massive use of LLMs, so this makes no sense.

Every single production OS, including the one you use right now, was made before LLMs even existed.

> What makes sense if that of course any LLM-generated code must be reviewed by a good programmer

The time of good programmers, especially ones working for free in their spare time on OSS projects, is a limited resource.

The ability to generate slop using LLMs, is effectively unlimited.

This discrepancy can only be resolved in one way: https://itsfoss.com/news/curl-ai-slop/

lifis 4 hours ago||
There are only 4 successful general-purpose production OSes (GNU/Linux, Android/Linux, Windows, OS X/iOS), and only one of those was made by the open source community (GNU/Linux).

And a new OS needs to be significantly better than those to overcome the switching costs.

swiftcoder 4 hours ago|||
> There are only 4 successful general purpose production OSes

Feel like you are using a very narrow definition of "success" here. Is BSD not successful? It is deployed on tens of millions of routers/firewalls/etc., in addition to being the ancestor of both modern macOS and PlayStation OS...

usrbinbash 4 hours ago|||
None of this counters the argument I made above :-)
lifis 4 hours ago||
Just because they were made before LLMs doesn't mean it can be done again, since there was just one success (GNU/Linux), and that success makes it much harder for new OSes since they need to be better than it.
eqvinox 4 hours ago||
Well, by this logic there have been 0 successful OSes made with LLMs so far...
sh4zb0t 4 hours ago|||
what a retarded view. All OSes you use today were developed without AI
dagi3d 5 hours ago||
they already have...
dev_l1x_be 2 hours ago||
In my experience, with the right set of guardrails, LLMs can deliver high-quality code. One interesting aspect is doing security reviews and formal verification with agents, which has proven to be very useful in practice.

https://www.datadoghq.com/blog/ai/harness-first-agents/

qsera 2 hours ago|
I think clients who care about getting good software will eventually require that LLMs are not directly used during development.

I think one way to think about the use of LLMs is to compare a dynamically typed language with a functional/statically typed one. Functional programming languages with static typing make it harder to implement a solution without understanding and developing an intuition for the problem.

But programming languages with dynamic typing will let you create (partial) solutions with a lesser understanding of the problem.

LLMs make it even easier to implement even more partial solutions while understanding even less of the problem (actually, zero understanding is required).
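A small Rust sketch of the typing point above (the `PaymentState` enum and `describe` function are hypothetical examples, not from any real codebase): static exhaustiveness checking means a "partial solution" simply does not compile, so the author is forced to confront the whole problem space up front.

```rust
// A hypothetical domain type with three states.
enum PaymentState {
    Pending,
    Settled,
    Refunded,
}

// The compiler requires this match to be exhaustive: deleting the
// Refunded arm (a "partial solution") is a compile error, rather than
// a bug discovered at runtime as it would be in a dynamic language
// dispatching on, say, string tags.
fn describe(state: PaymentState) -> &'static str {
    match state {
        PaymentState::Pending => "awaiting settlement",
        PaymentState::Settled => "done",
        PaymentState::Refunded => "money returned",
    }
}

fn main() {
    println!("{}", describe(PaymentState::Pending));
}
```

By contrast, a dynamically typed version would happily accept a handler covering only the cases the author happened to think of, deferring the gap to production.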

If I am a client who wants reliable software, then I want a competent programmer to

1. actually understand the problem,

2. and then come up with a solution.

The first part will be really important for me. Using LLM means that I cannot count on 1 being done, so I would not want the contractor to use LLMs.