
Posted by dmitrybrant 5 days ago

Using Claude Code to modernize a 25-year-old kernel driver(dmitrybrant.com)
916 points | 319 comments
rmoriz 5 days ago|
I was banned from an open-source project [1] recently because I suggested a bug fix. Their "code of conduct" not only forbids PRs but also comments on issues containing information retrieved by any AI tool or resource.

Thinking about asking Claude to reimplement it from scratch in Rust…

[1] https://codeberg.org/superseriousbusiness/gotosocial/src/bra...

lordhumphrey 5 days ago||
> 2. We will not accept changes (code or otherwise) created with the aid of "AI" tooling. "AI" models are trained at the expense of underpaid workers filtering inputs of abhorrent content, and does not respect the owners of input content. Ethically, it sucks.

Do you disagree with some part of the statement regarding "AI" in their CoC? Do you think there's a fault in their logic, or do you yourself personally just not care about the ethics at play here?

I find it refreshing personally to see a project taking a clear stance. Kudos to them.

Recently enjoyed reading the Dynamicland project's opinion on the subject very much too[0], which I think is quite a bit deeper of an argument than the one above.

Ethics seems to be, unfortunately, quite low down on the list of considerations of many developers, if it factors in at all to their decisions.

[0] https://dynamicland.org/2024/FAQ/#What_is_Realtalks_relation...

KingMob 5 days ago|||
Setting aside the categories of art and literature, training LLMs on FOSS software seems aligned with the spirit, if not the letter, of the licenses.

It does nothing to fix the issues of unpaid FOSS labor, though, but that was a problem well before the recent rise of LLMs.

lordhumphrey 3 days ago|||
The "spirit" of FOSS licences the various sorts of ideas that lead to the GNU project and the FSF in the 80s, and all that user-freedom-fighting heritage.

I think even critics of the GNU project and the FSF would have to admit that as historically accurate. I can only presume, then, that your comment is based on a lack of awareness of the history of FOSS licencing.

Perhaps a read of this would be a good start:

https://en.wikipedia.org/wiki/GNU_General_Public_License

creesch 4 days ago||||
> FOSS software seems aligned with the spirit, if not the letter, of the licenses.

Yeah, only if you look at permissive licenses like MIT and Apache; it most certainly doesn't follow the spirit of other licenses.

vbarrielle 5 days ago|||
I'm not sure it's very well aligned with the spirit of copyleft licenses.
DrewADesign 5 days ago||||
[flagged]
incr_me 5 days ago|||
> "AI" models are trained at the expense of underpaid workers filtering inputs of abhorrent content, and does not respect the owners of input content. Ethically, it sucks.

These ethics are definitely derived from a profit motive, however petty it may be.

AlecSchueler 5 days ago||
You're assuming "respect" means "payment" but it could be as simple as "recognition."
pjc50 5 days ago|||
Conversely, if the only motivation is profit, that's no ethics at all.

(and of course without non-profit motivations, none of the open source ecosystem would exist!)

DrewADesign 4 days ago||
Yeah that’s what I was getting at
wordofx 5 days ago|||
I disagree with their CoC on AI. There are so many projects which are important but don't let you contribute, or make the barrier to entry so high that you make a best effort to file a detailed bug report only for it to sit there for 14 years, or for them to tell you to get fucked. So anyone who complains about AI isn't worth the time of day, and I support them not getting paid as much, if at all.
pluto_modadic 5 days ago|||
you disobeyed a code of conduct? that's not a good look.
QuadmasterXLII 5 days ago|||
That must be so hard for you.
rmoriz 5 days ago||
The bugs are on them. I've fixed them in my fork, but of course I'll migrate to a non-discriminating alternative.
skydhash 4 days ago||
Your fork works, so why are you so unhappy? You can always publish your diff to help other people if you really want to.
rmoriz 4 days ago||
I don't want others to get trapped, hence I've unpublished my fixes. I'll also migrate to other software, as I have no time for dealing with such exclusionary politics. There is no point in arguing with stubborn and brainwashed people; the only solution is to move on and warn others.

That’s the reason I posted my comment.

sreekanth850 5 days ago|||
Suddenly I saw this: "Update regarding corporate sponsors: we are open to sponsorship arrangements with organizations that align with our values; see the conditions below." They should know that beggars can't be choosers.
3836293648 5 days ago|||
That's not begging. That's a preemptive rejection of people who think they can take control of the project through money.
driverdan 4 days ago|||
It's pretty funny that they say "We are not interested in input from right-wingers, nazis, ... or capitalists." and then say they're open to corporate sponsorships. If they want to be consistent they'd only be open to government or individual sponsors, not corps.
bgwalter 4 days ago|||
There is no "from scratch" for "AI". Claude will read the original, launder it, strip the license and pass it off as its own work.
TuxSH 4 days ago||
Indeed, LLMs cannot do truly novel thinking, and the laundering analogy is spot-on.

However, they're able to do more than just regurgitate code: I can have them explain to me the underlying (mathematical or whatever) concept behind the code, and then write new code from scratch myself with that knowledge.

Can/should this new code be considered derivative work, if the underlying principles were already documented in the literature?

wizzwizz4 4 days ago||
They can regurgitate explanations as well as code. I'd strongly recommend doing actual research: you'll find better (less-distorted, better laid out, more complete) explanations.
ok123456 4 days ago|||
"You used AI!" is now being weaponized by project maintainers who don't want to accept contributions, regardless of how innocuous.

A large C++ emulator project was failing to build with a particular compiler with certain -Werror flags enabled. The fix came down to reordering a few members (declaration order matters in C++) and using brace-initialization syntax in a few places. It was a +3/-3 diff. I got lambasted: one notoriously hostile maintainer accused me of submitting AI slop, and the others didn't understand why the order mattered and referred to it as "churn."

encom 4 days ago||
That particular CoC is a colossal red flag that the maintainers are utterly deranged. This might actually be the worst CoC I've ever seen. Any CoC is a red flag, but people often get pressured into it, so it's a sliding scale.
csmantle 5 days ago||
It's a good example of a developer who knows what to do with AI and what to expect from it. And a healthy sprinkle of skepticism, thanks to which he chose to make the driver a separate module.
tedk-42 5 days ago||
Really is an exciting future ahead. So many lost arts that don't need a dedicated human to relearn deep knowledge required to make an update.

A reminder, though: these LLM calls cost energy, and we need reliable power generation to iterate through this next tech cycle.

Hopefully all that useless crypto wasted clock cycle burn is going to LLM clock cycle burn :)

rvz 5 days ago||
> Really is an exciting future ahead. So many lost arts that don't need a dedicated human to relearn deep knowledge required to make an update.

You would certainly need an expert to make sure your air traffic control software is working correctly and not 'vibe coded', if you want to travel abroad safely next time.

We don't need a new generation who can't read code and are heavily reliant on whatever a chat bot said because: "you're absolutely right!".

> Hopefully all that useless crypto wasted clock cycle burn is going to LLM clock cycle burn :)

Useful enough for Stripe to build their own blockchain, and even that and the rest of them are more energy efficient than a typical LLM cycle.

But the LLM grift (or even the AGI grift) will not only cost even more than crypto, but the whole purpose of its 'usefulness' is the mass displacement of jobs with no realistic economic alternative other than achieving >10% global unemployment by 2030.

That's a hundred times more disastrous than crypto.

peteforde 5 days ago||
Have you ever read David Graeber's Bullshit Jobs? Because if not, you really should.
konfusinomicon 5 days ago||
yes they do! those are the humans that pass down those lost arts even if the audience is a handful. to trust an amalgamation of neurally organized binary carved intricately into metal with deep and often arcane knowledge and the lineage of lessons that produced it is so absurd that if a catastrophe that destroyed life as we know it did occur, we deserve our fate of devolution back to stone tools and such.
sedatk 5 days ago||
Off-topic, but I wish Linux had a stable ABI for loadable kernel modules. Obviously the kernel would have to provide shims for internal changes because internal ABI constantly evolves, so it would be costly and the drivers would probably run slower over time. Yet, having the ability to use a driver from 15 years ago can be a huge win at times. That kind of compatibility is one of the things I love about Windows.
fruitworks 5 days ago|
I think this would be terrible for the driver ecosystem. I don't want to run 15-year-old binary blob drivers just because they technically still work.

Just get the source code published into mainline.

sedatk 4 days ago|||
Ideally, yes. But obviously that's not possible for every driver in existence.
dd_xplore 4 days ago|||
And publishing shitty code invites wrath from Linus
mintflow 5 days ago||
When I was porting fd.io VPP to the Apple platform for my app, there was code implementing coroutines in inline ASM in a C file, but not in an Apple-supported syntax. I successfully used the Claude web interface to get the job done (Claude Code was not yet released), though, as in this article, I had strong domain-specific knowledge with which to write a relevant prompt.

Nowadays I rely heavily on Claude Code to write code. I start a task by creating a design, then write a bunch of prompts covering the design details, the detailed requirements, and the interactions/interfaces with other components. So far so good; it boosts productivity a lot.

But I am really worried, and still not quite able to believe, that this is the new norm of coding.

aussieguy1234 5 days ago||
AI works better when it has an example. In this case, all the code needed for the driver to work was already there as the example. It just had to update the code to reflect modern kernel development practices.

The same approach can be used to modernise other legacy codebases.

I'm thinking of doing this with a 15 year old PHP repo, bringing it up to date with Modern PHP (which is actually good).

brainless 5 days ago||
Empowering people is a lovely thing.

Here the author has a passion/side project they have been on for a while. Upgrading the tooling is a great thing. Community may not support this since the niche is too narrow. LLM comes in and helps in the upgrade. This is exactly what we want - software to be custom - for people to solve their unique edge cases.

Yes, the author is technical, but we are lowering the barrier, and it will be lowered even more. Semi-technical people will be able to solve some simpler edge cases, and so on. More power to everyone.

wg0 4 days ago||
I have used Gemini and OpenAI models too, but at this point Sonnet is the undisputed king.

I was able to port a legacy thermal printer user-mode driver from convoluted legacy JS to pure modern TypeScript in two to three days, at the end of which the printer did work.

Same caveats apply: I have a decent understanding of both languages, specifically the various legacy JavaScript patterns for modularity that emulate features the language lacked, such as classes.

piskov 4 days ago|
Check swe-bench results but for C#.

It’s literally pathetic how these things just memorize rather than achieve any actual problem-solving.

https://arxiv.org/html/2506.12286v3

antonvs 4 days ago||
You've misunderstood the study that you linked. LLMs certainly memorize, and this can certainly skew benchmarks, but that's not all they do.

Anyone with experience with LLMs will have experienced their actual problem solving ability, which is often impressive.

You'd be better off learning to use them, than speculating without basis about why they won't work.

piskov 4 days ago||
What exactly did I misunderstand?

Also, “learn to use them” gives off “you’re holding it wrong” vibes.

See also

https://machinelearning.apple.com/research/illusion-of-think...

wg0 4 days ago|||
You did not misunderstand anything. Sure, LLMs have no cognitive abilities, so even with widely used languages they hit a wall and need lots of hand-holding.
antonvs 4 days ago|||
The study doesn't show that "these things just memorize, not achieve any actual problem-solving."

Re learning to use them, I'm more suggesting that you should actually try to use them, because if you believe that they don't "achieve any actual problem-solving," you clearly haven't done so.

There are plenty of reports in this thread alone about how people are using them to solve problems. For coding applications, most of us are working on proprietary code that the LLMs haven't been trained on, yet they're able to exhibit strong functional understanding of large, unfamiliar codebases, and they can correctly solve many problems that they're asked to solve.

The illusion of thinking paper you linked seems to imply another misunderstanding on your part. All that's pointing out is a fact that's fairly obvious to anyone paying attention: if you use a text generation model to generate the text of supposed "thoughts", those aren't necessarily going to reflect the model's internal functioning.

Functionally, the models can clearly understand almost arbitrary domains and solve problems within them. If you want to claim that's not "thinking", that's really just semantics, and doesn't really matter except philosophically. The point is their functional capabilities.

globular-toast 5 days ago||
I don't think we really need an article a day fawning over LLMs. This is what they do. Yep.

Only thing I got from this is nostalgia from the old PC with its internals sprawled out everywhere. I still use desktop PCs as much as I can. My main rig is almost ten years old and it's been upgraded countless times although is now essentially "maxed out". Thank god for PC gamers, otherwise I'm not sure we'd still have PCs at all.

athrowaway3z 5 days ago|
> so I loaded the module myself, and iteratively pasted the output of dmesg into Claude manually,

One of the things that makes Claude my go-to option is its ability to start long-running processes, whose output it can read back to debug things.

There are a bunch of hacks you could have used here to skip the manual part, like piping dmesg to a local UDP port and having Claude start a listener.

mattmanser 5 days ago|
I think that's the thing holding a lot of coders back on agentic coding: these little tricks are still hard to get working. And that feedback loop is so important.

Even something simple like getting it to run a dev server in react can have it opening multiple servers and getting confused. I've watched streams where the programmer is constantly telling it to use an already running server.
