Posted by acnops 14 hours ago
I took a look at the project and it was a 100k+ LoC vibe-coded repository. The project itself looked good, but it seemed quite excessive relative to the problem it was solving. It made me wonder: does this exist because it is explicitly needed, or simply because it is so easy for it to exist?
It's fair to give the audience a choice to learn about an AI-created product or not.
If I used LLMs to generate a few functions, would I be eligible for it? What constitutes "built this with no/minimal AI"?
Maybe we should have a separate section for 80%+ vibe coded / agent developed.
As dang posted above, I think it's better to frame the problem as "influx of low quality posts" rather than making policies explicitly about AI. I'm not sure I even know what "AI" is anymore.
So in future everything’s gonna be “agentic”, (un)fortunately.
Every time I write about it, I feel like a doomsayer.
Anthropic admits that LLM use makes the brain lazy.
Just as we stopped remembering phone numbers once Google and mobile phones arrived, the same will probably happen with coding/programming.
One is where the human has a complete mental map of the product, and even if they use some code-generating tools, they take full responsibility for the result.
And there is another, emerging category, where developers don't have a full mental map because the code was created by an LLM, and no one actually understands what works and what doesn't.
I believe these are two categories that are currently merged into one Show HN, and while in the first category I can be curious about the decisions people made and the solutions they chose, I don't give a flying fork about what an LLM generated.
If you have a 'fog of war' in your codebase, well, you don't own your software, and there's no need to show it as yours. By the same token, just as autocomplete, or a typewriter in the age of handwriting, was fine as long as the thinking was yours, an LLM shouldn't be a problem.
I work with a large number of programmers who don't use AI and don't have an accurate mental map for the codebases they work in...
I don't think AI will make these folks more destructive. If anything, it will improve their contributions because AI will be better at understanding the codebase than them.
Good programmers will use AI like a tool. Bad programmers will use AI in lieu of understanding what's going on. It's a win in both cases.
Are the tokens to write out design documentation and lots of comments too expensive or something? I’m trying to figure out how an LLM will even understand what they wrote when they come back to it, let alone a human.
You have to reify mental maps if you have an LLM do significant amounts of coding; there really isn't any other option here.
"Oh, this library just released a new major version? What a pity, I used to know v n deeply, but v n+1 has this nifty feature that I like"
It happened all the time even as a solo dev. In teams, it's the rule, not the exception.
Vibing is just a different obfuscation here.
When you upgrade a library, you made that decision: you know why, you know what it does for you, and you can evaluate the trade-offs before proceeding (unless you're a React developer).
That's not a fog of war, that's delegation.
When an LLM generates your core logic and you can't explain why it works, that's a fundamentally different situation. You're not delegating — you're outsourcing the understanding, and that makes the result not yours.
The benefit of libraries is that they provide an abstraction and compartmentalization layer. You don't have to make raw REST calls to talk to AWS; you can use boto and move S3 files around without cluttering up your code.
Yeah, sometimes the abstraction breaks or fails, but generally that's rare unless the library really sucks, or you get a leftpad situation.
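For instance, a minimal sketch (assuming boto3; the bucket/key names and the move_object helper are hypothetical): a "move" is just a copy plus a delete hidden behind one small function, with no raw REST in sight.

    # Sketch: "move" an S3 object with boto3 (copy, then delete the original).
    # Bucket and key names are hypothetical placeholders.
    import boto3

    s3 = boto3.client("s3")

    def move_object(src_bucket, src_key, dst_bucket, dst_key):
        # Copy the object to its new location...
        s3.copy_object(
            Bucket=dst_bucket,
            Key=dst_key,
            CopySource={"Bucket": src_bucket, "Key": src_key},
        )
        # ...then delete the original.
        s3.delete_object(Bucket=src_bucket, Key=src_key)

    move_object("my-src-bucket", "reports/2024.csv", "my-archive-bucket", "reports/2024.csv")

If the abstraction ever leaks, it leaks inside that one function rather than all over the codebase.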
Having a mental map of your code doesn't mean you know everything; it means you understand how your code works, what it is responsible for, and how it interacts with or delegates to other things.
Part of being a good software engineer is managing complexity like that.
Case in point: aside from Tabbing furiously, I use the Ask feature to ask vague questions that would take my coworkers time they don't have.
Interestingly, at least in Cursor, IntelliSense seems to be dumbed down in favour of AI. So when I look at a commit, it typically shows a double-digit percentage of "AI co-authorship", even though most of the time that's just the result of using Tab, and IntelliSense would have given the same suggestion anyway.
This really bothers me: coming here asking for human feedback (basically, strangers spending time on their behalf), then dumping it into the slop generator while pretending it is even slightly appreciated. It wouldn't even be that much more work to prompt the LLM to hide its tone (https://news.ycombinator.com/item?id=46393992#46396486), but even that is too much.
How many non-native English speakers are on HN? If it's more than 30%, why should they have to write in a whole new language when they can just let an LLM do it in a natural-sounding way?
Post both versions
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Some of it is "I wish things I think are cool got more upvotes". Fair enough, I've seen plenty of things I've found cool not get much attention. That's just the nature of the internet.
The other point is show-and-share HN stories growing in volume, which makes sense since it's now considerably easier to build things. I don't think that's a bad thing really, although it makes curation more difficult. Now that pure agentic coding has, IMO, finally arrived, creativity and deciding what to build matter significantly more. They always did, but technical ability was often rewarded much more heavily. I guess that sucks for technical people.
HN has a very different personality at weekends versus weekdays. I tend to find most of the stuff I think is cool or interesting gets attention at the weekends, and you'll see slightly more off the wall content and ideas being discussed, whereas the weekdays are notably more "serious business" in tone. Both, I think, have value.
So I wonder if there's maybe a strong element of picking your moment with Show HN posts in order to gain better visibility amid the mass of other submissions.
Or maybe - but I think this goes against the culture a bit - Show HN could be its own category at the top. Or we could have particular days of the week/month where, perhaps by convention rather than enforcement, Show HN posts get more attention.
I'm not sure how workable these thoughts are but it's perhaps worth considering ways that Show HN could get a bit more of the spotlight without turning it into something that's endlessly gamed by purveyors of AI slop and other bottom-feeding content.
Chasing clout through these forums is ill-advised. I think people should post, sure. But don't read too much into the response. People don't really care. In my experience, even if you get an insanely good response, it's short-lived; people think it's cool. For me it never resulted in any conversions or continued use. It's cheap to upvote. I found the only way to build interest in your product is organic: 1-on-1 communication, real engagement in user forums, etc.
The difference now is that there is even less correlation between "good readme" and "thoughtful project".
I think that if your goal is to signal credentials/effort in 2026 (which is not everyone's goal), a better approach is to write about your motivations and process rather than the artefact itself - tell a story.
I've launched multiple side projects through Show HN over the years. The ones that got traction weren't better products. They hit the front page during a slow news hour and got enough early upvotes to survive the ranking curve. The ones that flopped were arguably more interesting but landed during a busy cycle. That's not a Show HN problem, that's a single-ranking-pool problem.
What would actually help is a separate ranking pool for Show HN with slower time decay. Let projects sit visible for longer so the community can actually try them before they drop off. pg's original vision was about making things people want. Hard to evaluate that in a 90-minute window.
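Concretely, a rough sketch of what "slower time decay" could mean (assuming the widely cited HN-style gravity formula; the constants are invented for illustration, not HN's actual code): give the Show HN pool a smaller gravity exponent.

    # Sketch: rank posts with a smaller gravity exponent for a separate Show HN pool.
    # Formula is the commonly cited HN-style approximation; constants are invented.
    def rank(points, age_hours, gravity):
        return (points - 1) / (age_hours + 2) ** gravity

    FRONT_PAGE_GRAVITY = 1.8   # commonly cited approximation
    SHOW_HN_GRAVITY = 1.2      # hypothetical: decays more slowly

    for hours in (1, 6, 24):
        print(hours,
              round(rank(50, hours, FRONT_PAGE_GRAVITY), 3),
              round(rank(50, hours, SHOW_HN_GRAVITY), 3))

With the smaller exponent, a 24-hour-old post with the same points ranks several times higher than it would under the front-page curve, which is exactly the "let it sit visible longer" effect.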
C'est la vie and que sera. I'm sure the artistic industry is feeling the same. Self expression is the computation of input stimuli, emotional or technical, and transforming that into some output. If an infallible AI can replace all human action, would we still theoretically exist if we're no longer observing our own unique universes?
Maybe if people did Show HN for projects that are useful for something? Or at least fun?
There's a disease on HN related to the latest fad:
- (now) "AI" projects
- (now) X but done with "AI"
- (now) X but vibecoded
- (less now, a lot more in the recent past) X but done in Rust
- (none now, quite a few in a more distant past) X but done with blockchain
If the main quality of the project is one of the above, why would it attract interest?
The thing in a Show HN has to do something to raise interest. If not even the author/marketer thinks it does something, why would anyone look at it?
Trane (good post): https://news.ycombinator.com/item?id=31980069
Pictures Are For Babies (lame post): https://news.ycombinator.com/item?id=45290805