Posted by speckx 20 hours ago

What we lost the last time code got cheap (www.poppastring.com)
115 points | 112 comments | page 2
cicko 18 hours ago|
What is wrong with using LLMs to analyze and explain code? Am I missing something? Before writing code, this is an even easier task to accomplish using AI.
_diyar 19 hours ago||
I think a huge gap in the market today is documentation that is both easy for humans to navigate and understand, but also readily ingestible for agents.
allthetime 19 hours ago|
Self generating docs based on docstring comments are great. LLMs are capable of generating architectural overview docs from these. What more do you need?
sanderjd 18 hours ago||
There is something to this, but to the concluding paragraph: I think these tools already are extremely good at helping us understand code, in addition to helping us generate it.
gojomo 19 hours ago||
The context of when that previous experience (Heartland outsourcing to India) happened would be helpful. The 90s? The 00s? The 10s?
lamename 19 hours ago|
The link in the article, right near the words you're talking about, goes to a Wikipedia page that says the book is from 2005. So I conclude it was 2005 or soon after.
0xjeffro 17 hours ago||
A coding agent, to me, means shifting my brain from memory-bound to compute-bound.
htx80nerd 19 hours ago||
>The cost of producing code has collapsed. AI tools can generate functional, adequate, perfectly average code at a speed and cost that would have been unimaginable even five years ago. And like the outsourcing wave of the early 2000s, the economics are real and rational. Nobody is wrong for using these tools. The code they produce is often fine. It works. It passes tests. It might ship as-is.

After using AI for months (Claude, Gemini, ChatGPT), it is extremely rare for their code to work 'as is' on the first shot; it almost always requires several iterations and cleanup of edge cases.

When it does work 'first shot' it's usually when it's transferring existing working code to a new project which is slightly different.

simonw 19 hours ago||
Have you tried the "use red/green TDD" trick?

I believe that increases the chances of one-shot code working, though it's also possible that it helped against Opus 4.5 and isn't necessary against Opus 4.7; I haven't spotted the difference yet.
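The red/green loop being referred to can be sketched like this (the example function is illustrative; the point is that the failing test is written first, giving the agent an executable definition of "done"):

```python
import re


# Red: write the test first and watch it fail (slugify doesn't exist yet).
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"


# Green: write just enough implementation to make the test pass.
def slugify(text: str) -> str:
    """Lowercase, strip punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)


test_slugify()  # passes: the loop has gone from red to green
```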

sanderjd 17 hours ago||
Yeah, this is the way. Thinking about how it will verify that it has done the right thing is the key. This can be set up at the AGENTS.md level.

Very simple things like: "Write tests and make sure they pass." "Run lint after each change." "Write API docs in XYZ format."

In my experience, they are very good at fixing things they've done wrong after discovering them during those kinds of steps.
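A minimal AGENTS.md along those lines might look like this (contents are illustrative, adapted from the instructions quoted above):

```markdown
# AGENTS.md

## Verification steps
- Write tests for every change and make sure they pass.
- Run lint after each change.
- Write API docs in XYZ format.
```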

sanderjd 17 hours ago|||
Weird, this isn't my experience at all (mostly writing Python lately). Granted, it usually doesn't implement things exactly the way I want, and I iterate a lot on that. But I think it's been like a year, at least six months, since the code didn't work on the first try.
bluebands 18 hours ago||
try gpt-5.5-xhigh fast in codex mac app, preferably with TDD and /goal, with a clearly defined end result

it's unbelievable, it will do the iterations for you, it will easily work 12 hours straight until it's a good output
