
Posted by PretzelFisch 5 days ago

Write-Only Code(www.heavybit.com)
30 points | 38 comments
p0w3n3d 2 hours ago|
This article assumes the AI bubble will never burst and that prices will never go up. The major LLM providers are located in the USA and China, but mostly the USA, so I can imagine the service becoming hostage to some political interest and being blocked for a country or more... This could also become a weapon, as creation of invisibly malicious code might be advised by LLMs, or even executed by LLMs running loose... So I'd say, echoing another author I read here, that the LLM is an exoskeleton: we're not going anywhere, just being strengthened and sped up by it, and constantly held accountable for the code we generate together with the LLM
ddoottddoott 3 hours ago||
Write-only blog posts.
dana321 2 hours ago|
Only read by ai crawlers to be reused in training data.
svilen_dobrev 5 days ago||
write-once read-never?

something that was not perl ;)

in ~2005 I led a team building horse-betting terminals for Singapore, whose server could only understand CORBA. So I modelled the needed protocol in Python, which generated a set of domain-specific Python files (one per domain), which in turn generated the needed C folders-of-files. Like 500 lines of models -> 5000 lines at the 2nd level -> 50000 lines of C at the bottom. We never read that output (once the pattern was established and working).

But - but - it was 1000% controllable and repeatable. Unlike current fancy "generators"..
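The two-stage pipeline described above can be sketched as follows. This is a minimal illustration under assumed names (MODEL, C_TYPES, emit_domain_generator, etc. are all invented here, not from the original project): a hand-written model generates per-domain Python source, and running that generated Python in turn emits C.

```python
# Sketch of a two-stage code generator: model -> generated Python -> C.
# All names and the toy "betting" domain are illustrative assumptions.

MODEL = {  # stage 0: the hand-written protocol model
    "betting": {
        "PlaceBet": [("race_id", "int"), ("amount", "double")],
        "CancelBet": [("bet_id", "int")],
    },
}

C_TYPES = {"int": "int32_t", "double": "double"}

def emit_domain_generator(domain, messages):
    """Stage 1: return the source of a generated Python module that
    knows how to emit C structs for one domain."""
    src = [f"# auto-generated Python for domain '{domain}'",
           "def emit_c():",
           "    parts = []"]
    for msg, fields in messages.items():
        src.append("    parts.append('typedef struct {')")
        for name, typ in fields:
            src.append(f"    parts.append('    {C_TYPES[typ]} {name};')")
        src.append(f"    parts.append('}} {msg};')")  # closing brace + typedef name
    src.append("    return '\\n'.join(parts)")
    return "\n".join(src)

def build_c(model):
    """Stage 2: execute each generated module and collect its C output."""
    c_files = {}
    for domain, messages in model.items():
        namespace = {}
        exec(emit_domain_generator(domain, messages), namespace)
        c_files[domain] = namespace["emit_c"]()
    return c_files

print(build_c(MODEL)["betting"])
```

The point of the comment holds in the sketch: given the same model, the output is byte-for-byte repeatable, which is exactly what LLM-based generation does not guarantee.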

Steinmark 3 hours ago||
If $x_{T+1} = \mathbb{E}[\mathrm{stitch}^\top \mathrm{gauge}^{-1}\,\mathrm{stitch}]$. Lemma: the system still runs, because $\mathrm{pattern}$ is stable ($\|\mathrm{pattern}\| < 1$) the system remains bounded (the app doesn't crash yet). Multi-modal $NEWT$ decouples real-time rendering (60 fps) from AI computation (<100 ms) and network calls (async latency to 5 ms, so the stitch count makes sense). The dominant eigenvector in the latent coupling.
marginalia_nu 2 hours ago|
How many cs are in the word "abacus"?
philipwhiuk 5 days ago||
> “AI writes the code” is already true inside many enterprise teams

I'm highly doubtful this is true. Adoption isn't even close to the level necessary for this to be the case.

henning 2 hours ago||
Stupid VC bullshit from someone who will not be woken up in the middle of the night by vibe coded garbage that is incredibly insecure and falls over if you look at it the wrong way.
lowsong 3 hours ago||
> LLMs are clearly a massive productivity boost for software developers, and the value of humans manually translating intent into lines of code is rapidly depreciating.

This take is so divorced from reality it's hard to take any of this seriously. The evidence continues to show that LLMs for coding only make you feel more productive, while destroying productivity and eroding your ability to learn.

logicprog 3 hours ago||
Re productivity: the METR study is seriously flawed overall, and:

1. if you disaggregate the highly aggregated data, it shows that the slowdown was highly dependent on task type: tasks that required using documentation, and novel tasks, were possibly sped up, whereas tasks the developers were very experienced with were slowed down, which actually matched the developers' own reports

2. developers were asked to estimate time beforehand per-task, but asked to estimate whether they were sped up or slowed down only once, afterwards, so the two measurements aren't really comparable

3. There were no rules about which AI to use, how to use it, or how much to use it, so it's hard to draw a clear conclusion

4. Most participants didn't have much experience with the AI tools they used (just prompting chatbots), and the one who did showed a big productivity boost

5. It isn't an RCT.

See [1] for all.

The Anthropic study used a task far too short (30 minutes) to really measure productivity. Furthermore, the AI users were using chatbots and spent the vast majority of their time manually retyping AI outputs; if you ignore that time, the AI users were 25% faster[2]. So the study was not a good basis for judging productivity, and the way people quote it is deeply misleading.

Re learning: the Anthropic study shows that how you use AI massively changes whether you learn and how well you learn; some of the best scoring subjects in that study were ones who had the AI do the work for them, but then explain it afterward[3].

[1]: https://www.fightforthehuman.com/are-developers-slowed-down-...

[2]: https://www.seangoedecke.com/how-does-ai-impact-skill-format...

[3]: https://www.anthropic.com/research/AI-assistance-coding-skil...

petetnt 2 hours ago|||
That’s the conclusion you get when you sit on the board of 20 companies where all the CEOs are telling you the same thing, but you don’t understand that you are all just selling the same golden shovel to each other. Obviously this can also be backed by their own experience: 100% of code is written by AI, because the last time they actually wrote code was in 2010.
botusaurus 3 hours ago||
Your comment is so divorced from reality...
aatd86 5 days ago|
There will be more of it where it does not matter. Maybe eventually, with time. At the moment, in my experience, most systems rely on hyperlinear semantics, especially scalable ones. Current LLMs cannot physically handle this. Maybe with biological or quantum (sic) computing.

But even then it is quite impressive.

Concretely, in my use case, off of a manually written code base, having Claude as the planner and code writer and GPT as the reviewer works very well. GPT is somehow better at minutiae and thinking in depth, but Claude is a bit smarter and somehow has better coding style.

Before 4.5, GPT was just miles ahead.
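The planner/reviewer split described in this comment can be sketched as a simple loop. Everything here is a hypothetical illustration: call_claude and call_gpt are stand-in stubs, not real client APIs, and the loop shape is one plausible reading of the workflow, not the commenter's actual setup.

```python
# Hypothetical sketch of a two-model loop: one model plans and writes,
# the other reviews until it has no objections or the budget runs out.
# call_claude / call_gpt are placeholder stubs, not real API clients.

def call_claude(prompt):
    # stand-in for a real Claude API call; returns a fake patch
    return f"PATCH for: {prompt}"

def call_gpt(prompt):
    # stand-in for a real GPT API call; empty string means "no objections"
    return ""

def plan_write_review(task, max_rounds=3):
    """Writer model produces a patch; reviewer model critiques it;
    writer revises until the reviewer is satisfied."""
    patch = call_claude(f"Plan and implement: {task}")
    for _ in range(max_rounds):
        objections = call_gpt(f"Review this patch for bugs:\n{patch}")
        if not objections:
            return patch  # reviewer had nothing to flag
        patch = call_claude(
            f"Revise the patch.\nReview notes:\n{objections}\n\n{patch}")
    return patch  # round budget exhausted; return best effort
```

The interesting design choice is the asymmetry: the model that is better at depth and minutiae only critiques, so its strengths are applied where the comment says they matter.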