not AI’s highlights.
Easy with the hot take.
That’s what most non-tech-person’s year in LLMs looked like.
Hopefully 2026 will be the year where companies realize that implementing intrusive chatbots can’t make better ::waving hands:: ya know… UX or whatever.
For some reason, they think its helpful to distractingly pop up chat windows on their site because their customers need textual kindergarten handholding to … I don’t know… find the ideal pocket comb for their unique pocket/hair situation, or had an unlikely question about that aerosol pan release spray that a chatbot could actually answer. Well, my dog also thinks she’s helping me by attacking the vacuum when I’m trying to clean. Both ideas are equally valid.
And spending a bazillion dollars implementing it doesn’t mean your customers won’t hate it. And forcing your customers into pathways they hate because of your sunk costs mindset means it will never stop costing you more money than it makes.
I just hope companies start being honest with themselves about whether or not these things are good, bad, or absolutely abysmal for the customer experience and cut their losses when it makes sense.
do not acknowledge that everyone in the world thinks this shit is a complete and total garbage fire
Companies have been doing this "live support" nonsense far longer than LLMs have been popular.
I’m on LinkedIn Learning digging into something really technical and practical and it’s constantly pushing the chat fly out with useless pre-populated prompts like “what are the main takeaways from this video.” And they moved their main page search to a little icon on the title bar and sneakily now what used to be the obvious, primary central search field for years sends a prompt to their fucking chatbot.
That's the pure, uncut copium. Meanwhile, in the real world, search on major platforms is so slanted towards slop that people need to specify that they want actual human music:
https://old.reddit.com/r/MusicRecommendations/comments/1pq4f...
We want curious conversation here.
didip, timonoko, mark_I_watson, icapybara, _pdp_, agentifysh, sanreau,
There's no way to know if these are genuine thoughts or incentivized compelled speech.
nativeit has a good way of putting it.
Your replies to "anonnon" make me less than hopeful for the future of HN in regards to AI. Seems like this might be trending in the direction of Reddit, where the interests are basically all paid for and imposed rather than being genuine and organic, and dissent is aggressively shut out.
"Curious conversation" does not really apply when it is compelled via monetary interest without any consideration toward potentially serious side effects.
"At least when herding cats, you can be sure that if the cats are hungry, they will try to get where the food is." This part of the guy's comment is actually funny and apt. Somehow that escaped you when you wrote your threat reply. That makes me wonder how mind-controlled you are.
"yupyupyups" has a small summary of some of the negatives, yet is being flagged. "techpression" similarly does, though is a bit more negative in his remarks. Also being flagged.
So the whole thread reads like this: 1.) talking about benefits? bubble to the top 2.) criticize? Either threatened by Dang or flagged to the bottom
Sounds a whole lot like compelled speech to me. Sounds a whole lot like mind-control.
It's pretty sad to see really.
It might just be your rule system. I personally want to see criticism. I don't have the sensitivity you have toward personal attacks or what you "deem" personal attacks when it is text on-screen. I don't care. I want to see what useful information might come out of it. I think your policing just makes everything worse to be honest. The thread will just die out in a day anyway.
I think I have criticized it in the past and you or some other staff said that it's a slippery slope toward useless aggressive banter that derails topics, but I don't know. I really don't agree with it. That's just my life experience.
Reddit is kind of like this. And it's basically turned into imposed topics rather than organic topics with massive amounts of echo-chambering in each delusional sub-reddit. Anything remotely against the grain is harshly culled as soon as possible. You can only imagine what the back-end looks like for that kind of thing. Money being involved at many steps is guaranteed.
And yeah as another commenter pointed out, this one guy's blog being at the top of hacker news every time is potentially suspicious as well.
I think I originally came to this place more than Reddit 10+ years ago because yeah it felt like people just excited and curious about their tech topics and it didn't feel like it was being rampantly policed or pushing a political agenda etc. I guess I should just not participate in these threads because the topic is tired on me at this point.
Wait I just read your user page and this is actually hilarious:
"Conflict is essential to human life, whether between different aspects of oneself, between oneself and the environment, between different individuals or between different groups. It follows that the aim of healthy living is not the direct elimination of conflict, which is possible only by forcible suppression of one or other of its antagonistic components, but the toleration of it—the capacity to bear the tensions of doubt and of unsatisfied need and the willingness to hold judgement in suspense until finer and finer solutions can be discovered which integrate more and more the claims of both sides. It is the psychologist's job to make possible the acceptance of such an idea so that the richness of the varieties of experience, whether within the unit of the single personality or in the wider unit of the group, can come to expression."
Marion Milner, 'The Toleration of Conflict', Occupational Psychology, 17, 1, January 1943
This made me immediately and uncontrollably guffaw.
HN isn't a place for thinking people any more (a long time coming, but you could squint and pretend until recently). Happy new year and adios, thanks for the 100s of accounts dang. Double pinky swear I won't make another.
I want LLM astroturfers to have their reputations destroyed for pushing this idiocy on us
2024 was a lot of talk, a lot of "AI could hypothetically do this and that". 2025 was the year where it genuinely started to enter people's workflows. Not everything we've been told would happen has happened (I still make my own presentations and write my own emails) but coding agents certainly have!
Objectively 0->1 lots of backlog.
This is me touting for Emacs
Emacs was a great plus for me over the last year. The integration with various tooling with comint (REPL integration), compile (build or report tools), TUI (through eat or ansi-term), gave me a unified experience through the buffer paradigm of emacs. Using the same set of commands boosted my editing process and the easy addition of new commands make it easy to fit my development workflow to the editor.
This is how easy it is to write a non-vague "tool X helped me" and I'm not even an English native speaker.
If you don't trust me, I can't conclusively convince you that AI makes me more efficient, but if you want I'm happy to hop on a screen-share and elaborate in what ways it has boosted my workflow. I'm offering this because I'm also curious what your work looks like where AI cannot help at all.
E-mail address is on my profile!
Your example is very vague.
See if you can spot the problem in my review of Excel in your style:
"It's great and I like how it's formula paradigm gave me a unified experience. It's table features boosted my science workflows last year".
The dismissive tone is warranted.
That's how you know you're on the right track
These fuckers have their pants down, don't let them trick you out of leaving your mark.
Different strokes, but I’m getting so much more done and mostly enjoying it. Can’t wait to see what 2026 holds!
Anyone that believes that they are completely useless is just as deluded as anyone that believes they're going to bring an AGI utopia next week.
they were right
It’s also possible that people more experienced, knowledgable and skilled than you can see fundamental flaws in using LLMs for software engineering that you cannot. I am not including myself in that category.
I’m personally honestly undecided. I’ve been coding for over 30 years and know something like 25 languages. I’ve taught programming to postgrad level, and built prototype AI systems that foreshadowed LLMs, I’ve written everything from embedded systems to enterprise, web, mainframes, real time, physics simulation and research software. I would consider myself an 7/10 or 8/10 coder.
A lot of folks I know are better coders. To put my experience into context: one guy in my year at uni wrote one of the world’s most famous crypto systems; another wrote large portions of some of the most successful games of the last few decades. So I’ve grown up surrounded by geniuses, basically, and whilst I’ve been lectured by true greats I’m humble enough to recognise I don’t bleed code like they do. I’m just a dabbler. But it irks me that a lot of folks using AI profess it’s the future but don’t really know anything about coding compared to these folks. Not to be a Luddite - they are the first people to adopt new languages and techniques, but they also are super sceptical about anything that smells remotely like bullshit.
One of the most wise insights in coding is the aphorism“beware the enthusiasm of the recently converted.” And I see that so much with AI. I’ve seen it with compilers, with IDEs, paradigms, and languages.
I’ve been experimenting a lot with AI, and I’ve found it fantastic for comprehending poor code written by others. I’ve also found it great for bouncing ideas. And the code it writes, beyond boiler plate, is hot garbage. It doesn’t properly reason, it can’t design architecture, it can’t write code that is comprehensible to other programmers, and treating it as a “black box to be manipulated by AI” just leads to dead ends that can’t be escaped, terrible decisions that will take huge amounts of expert coding time to undo, subtle bugs that AI can’t fix and are super hard to spot, and often you can’t understand their code enough to fix them, and security nightmares.
Testing is insufficient for good code. Humans write code in a way that is designed for general correctness. AI does not, at least not yet.
I do think these problems can be solved. I think we probably need automated reasoning systems, or else vastly improved LLMs that border on automated reasoning much like humans do. Could be a year. Could be a decade. But right now these tools don’t work well. Great for vibe coding, prototyping, analysis, review, bouncing ideas.
What are some of the models you've been working with?
Here is the changelog for OpenBSD 7.8:
https://www.openbsd.org/78.html
There's nothing here that says: We make it easier to use it more of it. It's about using it better and fixing underlying problems.
Mistakes and hallucinations matter a whole lot less if a reasoning LLM can try the code, see that it doesn't work and fix the problem.
Does it? It's all prompt manipulation. Shell script are powerful yes, but not really huge improvement over having a shell (REPL interface) to the system. And even then a lot of programs just use syscalls or wrapper libraries.
> can try the code, see that it doesn't work and fix the problem.
Can you really say that does happens reliably?
If you mean 100% correct all of the time then no.
If you mean correct often enough that you can expect it to be a productive assistant that helps solve all sorts of problems faster than you could solve them without it, and which makes mistakes infrequently enough that you waste less time fixing them than you would doing everything by yourself then yes, it's plenty reliable enough now.
Its very difficult to argue the point that claude code:
1) was a paradigm shift in terms of functionality, despite, to be fair, at best, incremental improvements in the underlying models.
2) The results are an order of magnitude, I estimate, better in terms of output.
I think its very fair to distill “AI progress 2025” to: you can get better results (up to a point; better than raw output anyway; scaling to multiple agents has not worked) without better models with clever tools and loops. (…and video/image slop infests everything :p).
My point is purely that, compared to 2024, the quality of the code produced by LLM inference agent systems is better.
To say that 2025 was a nothing burger is objectively incorrect.
Will it scale? Is it good enough to use professionally? Is this like self driving cars where the best they ever get is stuck with an odd shaped traffic cone? Is it actually more productive?
Who knows?
Im just saying… LLM coding in 2024 sucked. 2025 was a big year.
Invariably they've never used AI, or at most very rarely. (If they used AI beyond that, this would be admission that it was useful at some level).
Therefore it's reasonable to assume that you are in that boat. Now that might not be true in your case, who knows, but it's definitely true on average.
- fart out demos that you don't plan on maintaining, or want to use as a starting place
- generate first-draft unit tests/documentation
- generate boilerplate without too much functionality
- refactor in a very well covered codebase
It's very useful for all of the above! But it doesn't even replace a junior dev at my company in its current state. It's too agreeable, makes subtle mistakes that it can't permanently correct (GEMINI.md isn't a magic bullet, telling it to not do something does not guarantee that it won't do it again), and you as the developer submitting LLM-generated code for review need to review it closely before even putting it up (unless you feel like offloading this to your team) to the point that it's not that much faster than having written it yourself.
a personal attack would be eg calling him a DC.
all I did was point out the intellectual dishonesty of his argument. that's an attack on his intellectually dishonest argument, not his person.
by all means go ahead and ban me
Ditto for "I am very disappointed about your BULLSHIT" in the GP comment.
(For anyone else reading this thread: my comment originally just read "Got a good news story about that one?" - justatdotin posted this reply while I was editing the comment to add the extra text.)