Posted by prakhar897 1/27/2026
What surprised me was how much the ugly first version taught me that planning never could. You learn what users actually care about (often not what you expected), which edge cases matter in practice, and what "good enough" looks like in context.
The hardest part is giving yourself permission to ship something you know is flawed. But the feedback loop from real usage is worth more than weeks of hypothetical architecture debates.
Commenter's history is full of 'red flags':
- "The real cost of this complexity isn't the code itself - it's onboarding"
- "This resonates."
- "What actually worked"
- "This hits close to home"
- "Where it really shines is the tedious stuff - writing tests for edge cases, refactoring patterns across multiple files, generating boilerplate that follows existing conventions."
> Commenter's history is full of 'red flags': - "The real cost of this complexity isn't the code itself - it's onboarding" - "This resonates."
Wow, it's obvious in the full comment history. What is the purpose of this stuff? Do social marketing services maintain armies of bot accounts that build up credibility with normal-ish comments, so they can be called on later like sleeper cells for marketing? On Twitter I already have to scroll down to find the one human reply on many posts.
And when the bots get a bit better (or people get less lazy prompting them; I'm pretty sure I could prompt one to avoid this classic prose style), we'll have no chance of knowing what's a bot. How long until the majority of the Internet is essentially a really convincing version of r/SubredditSimulator? When I stop being able to recognize the bots, I wonder how I'll feel. They'll probably be writing genuinely helpful/funny posts, or telling a touching personal story I upvote, but it's pure bot creative writing.
Russia and Israel are known to have been running full-time operations doing this for well over a decade. By Twitter's own account, 25% of users were bots back in 2015 (their peak user year). Even here on HN, if you look at the most trafficked Israel/Palestine threads, there are lots of people complaining about getting modded into oblivion, the conversation being steered neutral/pro-Israel, and negative comments being silenced by a ghost army of modders.
This particular piece is LinkedIn “copy pasta” with many verbatim or mildly variant copies.
Example: https://www.linkedin.com/posts/chriswillx_preparing-to-do-th...
And in turn, see: https://strangestloop.io/essays/things-that-arent-doing-the-...
Relatedly, LLMs clearly picked the "LinkedIn influencer" style up.
My guess is some cross-over between those who write this way on LinkedIn and those who engage with chatbot A/B testing or sign up for the human reinforcement learning / fine tuning / tagging jobs, training in a preference for it.
I understand that it's not the main point in your comment (you're trying to determine if the parent comment was written using an LLM), but yes, we do exist: I've spent years planning personal projects that remain unimplemented. Don't underestimate the power of procrastination and perfectionism. Oliver Burkeman ("Four Thousand Weeks", etc.) could probably explain that dynamic better than me.
My struggle is having enough patience to do any planning before I start building. As soon as there's even the remote hint of a half-baked idea in my head, it's incredibly tempting to just start building and figure out stuff as I go along.
I resist working like that because I am mega ignorant and I know I will encounter problems that I won't recognize until I get to them.
But, I also HATE having to rework my projects because of something I overlooked.
My (attempted) solution is to slog through a chat with an AI to build a Project Requirements Document and to answer every question it asks about my blind spots. It mostly helps me build stuff. And sometimes the friction prevents me from overloading myself with more unfinished projects!
We currently live in the very thin sliver of time where the internet is already full of LLM writing, but where it's not quite invisible yet. It's just a matter of time before those Dead Internet Theory guys score another point and these comments are indistinguishable from novel human thought.
I don't think it will become significantly less visible⁰ in the near future. The models are going to hit the problem of being trained on LLM-generated content, which will slow the growth in their effectiveness quite a bit. It is already a concern that people are trying to develop mitigations for, and I expect it to hit hard soon unless some new revolutionary technique pops up¹².
> those Dead Internet Theory guys score another point
I'm betting that us Habsburg Internet predictors will have our little we-told-you-so moment first!
--------
[0] Though it is already hard to tell when you don't have your thinking head properly on sometimes. I bet it is much harder for non-native speakers, even relatively fluent ones, of the target language. I'm attempting to learn Spanish and there is no way I'd see the difference at my level in the language (A1, low A2 on a good day) given it often isn't immediately obvious in my native language. It might be interesting to study how LLM generated content affects people at different levels (primary language, fluent second, fluent but in a localised creole, etc.).
[1] and that revolution will likely be in detecting generated content, which will make generated content easier to flag for other purposes too, starting an arms race rather than solving the problem overall
[2] such a revolution will pop up, it is inevitable, but I think (hope?) the chance of it happening soon is low
Remember back in the early 2000s when people would photoshop one animal's head onto another and trick people into thinking "science has created a new animal"? That obviously doesn't work anymore because we know that's possible, even relatively trivial, with Photoshop. I imagine the same will happen here: as AI writing gets more common, we'll begin a subconscious process of determining whether the writer is human. That's probably a bit unfairly taxing on our brains, but we survived Photoshop, I suppose.
The obviously fake ones were easy to detect, and the less obvious ones took some sleuthing. But the good fakes totally fly under the radar. You literally have no idea how many of the images you see are well doctored, because you can't tell.
Same for LLMs in the near future (or perhaps already). What will we do when we realize we have no way of distinguishing man from bot on the internet?
> What will we do when we realize we have no way of distinguishing man from bot on the internet?
The idea is that this is a completely different scenario if we're aware of it being a potential problem versus not being aware of it at all. Maybe we won't be able to tell 100% of the time, but it's something we'll consider.
Not sure about this user specifically, but interesting that a lot of their comments follow a pattern of '<x> nailed it'
Psy-ops, astroturfing, now LLM slop.
Ironically, I see this very often with AI/vibe coding, and whilst it does happen with traditional coding too, it happens with AI to an extreme degree. Spend 5 minutes on twitter and you'll see a load of people talking about their insane new vibe coding setup and next to nothing of what they're actually building
Probably. I've been known to spend weeks planning something that I then forget and leave completely unstarted because other things took my attention!
> Commenter's history is full of 'red flags'
I wonder how much these red flags are starting to change how people write without LLMs, to avoid being accused of being a bot. A number of text checking tools suggested replacing ASCII hyphens with m-dashes in the pre-LLM-boom days¹ and I started listening to them, though I no longer do. That doesn't affect the overall sentence structure, but a lot of people jump on m-/n- dashes anywhere in text as a sign, not just in “it isn't <x> - it is <y>” like patterns.
It is certainly changing what people write about, with many threads like this one being diverted into discussing LLM output and how to spot it!
--------
[1] This is probably why there are many of them in the training data, so they are seen as significant by tokenisation steps, so they come out of the resulting models often.
> "A typo or two also helps to show it’s not AI (one of the biggest issues right now)."
The best marketing is usually brief.
There is so much to be learned about a problem - and programming in general - by implementing stuff and then refactoring it into the ground. Most of the time the abstractions I think up at first are totally wrong. Like, I imagine my program will model categories A, B and C. But when I program it up, the code for B and C is kinda similar. So I combine them, and realise C is just a subset of B. And sometimes then I realise A is a distinct subset of B as well, and I rewrite everything. Or sometimes I realise B and C differ in one dimension, and A and B in another. And that implies there's a fourth kind of thing with both properties.
Do this enough and your code ends up in an entirely unrecognisable place from where you started. But very, very beautiful.
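A minimal sketch of what that collapse can look like in code, with hypothetical categories and field names invented purely for illustration:

    from dataclasses import dataclass

    # First pass: three separate "kinds of thing", modelled independently.
    @dataclass
    class A:
        name: str

    @dataclass
    class B:
        name: str
        recurring: bool

    @dataclass
    class C:  # implementing this reveals it's just B with recurring=True
        name: str

    # After a few rewrites: one type, two independent dimensions. The
    # "fourth kind of thing" is the previously unused combination of flags.
    @dataclass
    class Item:
        name: str
        recurring: bool = False  # the axis that distinguished B from C
        external: bool = False   # the axis that distinguished A from B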
Fred Brooks' 1975 book “The Mythical Man-Month” includes an essay called “Plan to Throw One Away”.
He argues much what you’ve described.
Of course, in reality we seldom do actually throw away the first version. We’ve got the tools and skills and processes now to iterate, iterate, iterate.
Of course you’ll also maintain the satisfaction of doing something well.
Nice statement.
I think there is another, equally pervasive problem: balancing shipping something against strategizing a complete "operating system", but as seen through the eyes of OTHER stakeholders.
I'm in this muck now. Working with an insurance co that's building internal tools. On one hand we have a COO that wants an operating model for everything, and what feels like strategy/process diagrams as proof of work.
Meanwhile I am encouraging not overplanning and instead building stuff, shipping, seeing what works, iterating, etc.
But that latter version causes anxiety as people "don't know what you're doing" when, in fact, you're doing plenty but it's just not the slide-deck-material things and instead the tangible work.
There is a communication component too, of course. Almost an entirely separate discipline.
I've never arrived at acceptable comfort on either side of this debate but lean towards "perfect is the enemy of good enough"
The most important aspect of software design, at least with respect to software that you intend not to completely throw away and will be used by at least one other person, is that it is easy to change, and remains easy to change.
Whether it works properly or not, whether it's ugly and hacky or not, or whether it's slow... none of that matters. If it's easy to change you can fix it later.
Put a well thought out but minimal API around your code. Make it a magic black box. Maintain that API forever. Test only the APIs you ship.
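A tiny sketch of that black-box idea, with made-up names; the point is only that the public surface stays small and stable while the internals remain free to be ugly or rewritten later:

    def shorten(url: str) -> str:
        """Public API: the only thing callers and tests are allowed to touch."""
        return _hacky_first_version(url)

    def _hacky_first_version(url: str) -> str:
        # Internals are not part of the contract; replace them whenever.
        return "https://sho.rt/" + str(abs(hash(url)) % 10_000_000)

    def test_shorten_is_deterministic_within_a_run():
        # Test only the shipped API, never the private helpers.
        assert shorten("https://example.com") == shorten("https://example.com")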
The plumbing also needs iteration and prototyping, but sound, forward looking decisions at the right time pay dividends later on. That includes putting extra effort and thinking into data structures, error handling, logging, naming etc. rather earlier than later. All of that stuff makes iterating on the higher levels much easier very quickly.
One of my friends calls it "development-driven development".
There is a difference between shipping something that works but is not perfect, and shipping something knowingly flawed. I’m appalled at this viewpoint. Let’s hope no life, reputation or livelihood depends on your software.
"I spent weeks planning" -- using the terminology from that book: No, you didn't spend weeks planning, you spent weeks building something that you _thought_ was a plan. An actual plan would give you the information you got from actually shipping the thing, and in software in particular "a model" and "the thing" look very similar, but for buildings and bridges they are very different.
Not saying this is you, but it's so easy for people to give up and sour into hyper-pragmatists competing to become the world's worst management. Their insecurities take over and they actively suppress anyone trying to do their job by insisting everything be rewritten by AI, or push hard for no-code solutions.
Do a thing. Write rubbish code. Build broken systems. Now scale. Then learn how to deal with the pattern changing as domain-specific patterns emerge.
I watched this at play with a friend's startup. He couldn't get response times within the window needed for his third-party integration. After some hacking, we opted to cripple his webserver. Turns out you can slice out mass amounts of the HTTP protocol (and with it the server overhead) and still meet all of your needs. Sure, it needs a recompile - but it worked and scaled, far more than anything else they did. Their exit proved that point.
This one works for me, and I learned it from a post on HN. Whenever I feel stuck or overthink how to do something, I just do it first - even with all the flaws I'm already aware of, and even if it feels almost painful to do it so badly. Then I improve it a bit, then a bit more, and before I know it a clear picture starts to emerge... Feels like magic.
Got me through many a rough spot.
if you're worried about doing it well, you're a step or two ahead of where you need to be
I think I've reached the "make it fast" bit in my career twice. Most projects are considered as ready after "make it work" =)
Dan Harmon's advice on writer's block: https://www.reddit.com/r/Screenwriting/comments/5b2w4c/dan_h...
>You know how you suck and you know how everything sucks and when you see something that sucks, you know exactly how to fix it, because you're an asshole. So that is my advice about getting unblocked. Switch from team "I will one day write something good" to team "I have no choice but to write a piece of shit" and then take off your "bad writer" hat and replace it with a "petty critic" hat and go to town on that poor hack's draft and that's your second draft.
"The Gap" by Ira Glass: https://www.reddit.com/r/Screenwriting/comments/c98jpd/the_g...
>Your taste is why your work disappoints you... it is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions.
'“One day, I’m gonna write that novel.” Pal? You better start tomorrow morning because the right time never happens. It’s when you boldly determine it. It’s like running on a rainy day. You’re fine once you get out there. The only difficulty is getting off the couch when you lace your shoes up.'
I learned this the bad way, but now I just lie and say it doesn't work until it's good enough for me
Remarkably common, but not inevitable. Thankfully there's plenty of workplaces which don't look like this.
And yeah, lying is certainly one way to get work done in a bad organisation. I'd much rather - if at all possible - find and fix the actual problem.
If you don't ship it in time it's also your fault
This is bound to happen with any company that needs to deliver to clients. Sales are incentivized to sell at all cost, even if the product is not there yet.
Personally I’ve never felt blamed for bugs that made it into production. I’ve felt responsible for sure, but I’ve never been blamed by others in the business. And I have seen sales people utterly chewed out for selling features we haven’t implemented without asking engineering. It all really depends on where you work, and what the culture is like.
If you hate it there, you don’t have to stay. Not every job will be like that.
I hate this, but seems to be fairly normal practice.
In my own work, this often looks like writing the quick and dirty version (alpha), then polishing it (beta), then rewrite it from scratch with all the knowledge you gained along the way.
The trick is to not get caught up on the beta. It's all too tempting to chase perfection too early.
Funny how these things are a positive when done by a human and a negative when done by an LLM. According to all the anti-LLM experts... Humans generate perfect code on the first pass every time and it's only LLMs that introduce bad implementations. And this isn't a callout of this user specifically; it's a generalization about the anti-AI sentiment on HN. If incremental improvement works, incremental improvement works.
> Humans generate perfect code on the first pass every time and it's only LLMs that introduce bad implementations.
That's not what the "anti-llm experts" are saying at all. If you think of LLMs as "bad first draft" machines, then you'll likely be successful in finding ways to use LLMs.
But that's not what is being sold. Altman and Amodei are not selling "this tool will make bad implementations that you can improve on". They are selling "this tool will replace your IT department". Calling out that the tool isn't capable of doing that is not pretending that humans are perfect by comparison.
[0]: https://strangestloop.io/essays/things-that-arent-doing-the-...
The contents are so similar, that it cannot be coincidence. It really seems like the author of this blog simply plagiarized the strangestloop post without referring to it at all...
This is a tasteless copy.
They'd love to talk about problems, investigate them from all angles, make plans on how to plan to solve the problem, identify who caused it or who to blame for it, quantify how much it costs us or how much money we could make from solving it, everything and anything except actually doing something about it.
It was never about doing the thing.
Somewhat related, I've learned that when you're the one who ends up doing the thing, it's important to take advantage of that. Make decisions that benefit you where you have the flexibility.
especially the middle managers, i.e. engineering managers, senior engineering managers, directors of engineering, etc. etc.
there's less coordination to do - to keep managers up to date.
the most functional software orgs out there - don't have managers
At work we built something from a 2-page spec in 4 months. The competing team spent 8 months on architecture docs before writing code. We shipped. They pivoted three times and eventually disbanded.
Planning has diminishing returns. The first 20% of planning catches 80% of the problems. Everything after that is usually anxiety dressed up as rigor.
The article's right about one thing: doing it badly still counts. Most of what I know came from shipping something embarrassing, then fixing it.
"Preparation" isn't mentioned explicitly, but by my reading it would come firmly under "is not doing the thing".
How do you not be "toxic" after that? How do you retain a chipper attitude when you know for a rock-solid certainty that even if the project is successful it's likely by accident?
From the Red Dwarf book and quoted previously:
Or if you want another way of thinking about it: code isn't only useful for deployment. It's also a tool you can use during the planning process to learn more about the problem you're trying to solve. When planning, the #1 killer is unknown unknowns. You can often discover a lot of them by building a super simple prototype.
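For instance, a throwaway script along these lines (names and numbers invented for illustration) can settle a risky assumption before any real planning happens:

    import random
    import time

    def naive_dedupe(records):
        # The simplest thing that could possibly work.
        return list({r["email"]: r for r in records}.values())

    records = [{"email": f"user{random.randint(0, 50_000)}@example.com"}
               for _ in range(200_000)]

    start = time.perf_counter()
    deduped = naive_dedupe(records)
    elapsed = time.perf_counter() - start

    # If this is already fast enough, the clever design may be unnecessary;
    # if it isn't, that's an unknown unknown discovered before the plan exists.
    print(f"{len(records)} -> {len(deduped)} records in {elapsed:.3f}s")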
Pivoting to zero planning would also have its own basket of flaws.
The way to break through that is indeed to start doing. Forget about the edge cases. Handle the happy path first. Build something that does enough to deliver most of the value. Then refine it; or rebuild it.
Seriously. The cost of prototyping is very low these days. So try stuff out and learn something. Don't be afraid to fail.
One reason LLMs are so shockingly effective for this is that they don't do analysis paralysis; they start doing right away. The end results aren't always optimal or even good but often still good enough. You can optimize and refine later. If that is actually needed. Worst case you'll fail to get a useful thing but you'll have a lot better understanding of the requirements for the next attempt. With AI the sunk cost is measured in tokens. It's not free. But also not very expensive. You can afford to burn some tokens to learn something.
A good rule is to not build a framework or platform for anything until you've built at least three versions of the type of thing you would use it for. Anything you build before that is likely to be under- and overengineered in exactly the wrong places. Those places only make themselves clear when you build a real system.
Good enough is a self-limiting fallacy.
A prototype failing to attract fans doesn't prove a lack of a market for the job the prototype attempts to perform. It only proves the prototype, as it stands, lacks something.
Beware quitting early. All good builders do.
1. https://strangestloop.io/essays/things-that-arent-doing-the-...
In the GenAI era, "doing the thing badly without planning" has become so easy that some counterweight is needed.
The characters in the book are quick to cut non-productive discussions short, but it feels like the feel good discussions around "the thing" are about as far as many people want to go these days.