Posted by danielfalbo 12/20/2025

Reflections on AI at the End of 2025 (antirez.com)
243 points | 363 comments | page 2
jimmydoe 12/20/2025|
> * The fundamental challenge in AI for the next 20 years is avoiding extinction.

Sorry, I say it's folding the laundry. With an aging population, that's the most useful thing, if not the only one.

abricq 12/20/2025||
> * Programmers resistance to AI assisted programming has lowered considerably. Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway: now the return on the investment is acceptable for many more folks.

Could not agree more. I myself started 2025 being very skeptical, and finished it very convinced about the usefulness of LLMs for programming. I have also seen multiple colleagues and friends go through the same change of appreciation.

I noticed that for certain tasks, our productivity can be multiplied by 2 to 4. Hence my doubts: are there going to be too many of us developers / software engineers? What will happen to the rest of us?

I assume that fields other than software should also benefit from the same productivity boosts. I wonder if our society is ready to accept that people should work less. I think the more likely continuation is that companies will either hire less or fire more, instead of accepting to pay the same for fewer hours of human work.

danielfalbo 12/20/2025||
> Are there going to be too many of us developers / software engineers? What will happen to the rest of us?

I propose that we should raise the bar for the quality of software now.

throw1235435 12/21/2025|||
I don't think that will happen, because it hasn't for other technological improvements. In the end people pay for "good enough" and that's that. If "good enough" is now cheaper to implement, that's all they will build. I've seen it with other technologies. As an example, thanks to more precise manufacturing, many manufacturers have cheapened things like cars, electronics, etc. just to the point where they mostly outlast the warranty; in the old days they had to "overbuild" to reach that point, putting more quality into the product.

Quality is a risk mitigation strategy; if software is disposable, just like cheap manufactured goods, most people won't pay for it, thinking they can just "build another one". What we don't realise is that, because of the sheer cost of building software, we've wanted quality, since it's too expensive to fix later; AI could change that.

Hoping that we'll invest in quality, build more software (whose demand is mostly price inelastic due to scale/high ROI), and so on is, I'm starting to think, just false hope from people in the tech industry who want to be optimistic, which is generally in our nature. Tech people often understand very little about economics and about how people outside tech (your customers) generally operate. My reflection is mostly that I need to pivot out of software; it will be commoditized.

abricq 12/20/2025|||
Yes, I certainly agree. A few days ago there was a blog post here claiming that formal verification will become much more widely used thanks to AI, the author arguing that AI will help us over the difficulty barrier of writing formal proofs.
throw1235435 12/21/2025|||
I'm not sure that it will scale to fields other than coding and math. The RLVR approach makes it more amenable to STEM fields in general, and most jobs, believe it or not, aren't that. The volume of open source software with good test suites effectively gave them all the training material they needed; most professions won't provide that, knowing they would be giving their moat away. From my understanding, LLMs in other fields still exhibit the same hallucination rates, only mildly improved, especially where there isn't public internet material for that field.

We have to accept in the end that coding/SWE is one of the fields most disrupted by this breed of AI. Disruption unfortunately probably means fewer jobs overall. The profession is on trend to disrupt and automate itself, I think; plan accordingly. I've seen so many articles claiming it's great we didn't learn to code; now that's what the AIs have done.

antihipocrat 12/20/2025||
I like to think of it as adding new lanes to a highway. More will be delivered until it all jams up again.
roughly 12/20/2025||
> A few well known AI scientists believe that what happened with Transformers can happen again, and better, following different paths, and started to create teams, companies to investigate alternatives to Transformers and models with explicit symbolic representations or world models.

I’m actually curious about this and would love pointers to the folks working in this area. My impression from working with LLMs is there’s definitely a “there” there with regards to intelligence - I find the work showing symbolic representation in the structure of the networks compelling - but the overall behavior of the model seems to lack a certain je ne sais quoi that makes me dubious that they can “cross the divide,” as it were. I’d love to hear from more people that, well, sais quoi, or at least have theories.

pton_xd 12/20/2025||
> For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots: probabilistic machines that would: 1. NOT have any representation about the meaning of the prompt. 2. NOT have any representation about what they were going to say. In 2025 finally almost everybody stopped saying so.

It's interesting that Terence Tao just released his own blog post stating that they're best viewed as stochastic generators. True, he's not an AI researcher, but it does sound like he's using AI frequently with some success.

"viewing the current generation of such tools primarily as a stochastic generator of sometimes clever - and often useful - thoughts and outputs may be a more productive perspective when trying to use them to solve difficult problems" [0].

[0] https://mathstodon.xyz/@tao/115722360006034040

jdub 12/20/2025||
I get the impression that folks who have a strong negative reaction to the phrase "stochastic parrot" tend to do so because they interpret it literally or analogously (revealed in their arguments against it), when it is most useful as a metaphor.

(And, in some cases, a desire to deny the people and perspectives from which the phrase originated.)

antirez 12/20/2025||
What happened recently is that all the serious AI researchers who were on the stochastic parrot side changed their point of view but, incredibly, people without a deep understanding of such matters, previously exposed to such arguments, are lagging behind and still repeat arguments that the people who popularized them would not repeat again.

Today there is no top AI scientist that will tell you LLMs are just stochastic parrots.

emp17344 12/20/2025|||
You seem to think the debate is settled, but that’s far from true. It’s oddly controlling to attempt to discredit any opposition to this viewpoint. There’s plenty of research supporting the stochastic view of these models, such as Apple’s “Illusion” papers. Tao is also a highly respected researcher, and has worked with these models at a very high level - his viewpoint has merit as well.
visarga 12/20/2025||||
The stochastic parrot framing makes some assumptions, one of them being that LLMs generate from minimal input prompts, like "tell me about Transformers" or "draw a cute dog". But when the input provides substantial entropy or novelty, the output will not look like any training data. And longer sessions with multiple rounds of messages also deviate OOD. The model is doing work outside its training distribution.

It's like saying pianos are not creative because they don't make music. Well, yes, you have to play the keys to hear the music, and transformers are no exception. You need to put in your unique magic input to get something new and useful.

geraneum 12/20/2025|||
Now that you’re here, what do you mean by “scientific hints” in your first paragraph?
lowsong 12/20/2025||
I'm impressed that such a short post can be so categorically incorrect.

> For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots

> In 2025 finally almost everybody stopped saying so.

There is still no evidence that LLMs are anything beyond "stochastic parrots". There is no proof of any "understanding". This is seeing faces in clouds.

> I believe improvements to RL applied to LLMs will be the next big thing in AI.

With what proof or evidence? Gut feeling?

> Programmers resistance to AI assisted programming has lowered considerably.

The evidence suggests the opposite: most developers do not trust it. https://survey.stackoverflow.co/2025/ai#2-accuracy-of-ai-too...

> It is likely that AGI can be reached independently with many radically different architectures.

There continues to be no evidence beyond "hope" that AGI is even possible, let alone that Transformer models are the path there.

> The fundamental challenge in AI for the next 20 years is avoiding extinction.

Again, nothing more than a gut feeling. Much like all the other AI hype posts this is nothing more than "well LLMs sure are impressive, people say they're not, but I think they're wrong and we will make a machine god any day now".

crystal_revenge 12/20/2025|
Strongly agree with this comment. Decoder-only LLMs (the ones we use) are literally Markov Chains, the only (and major) difference is a radically more sophisticated state representation. Maybe "stochastic parrot" is overly dismissive sounding, but it's not a fundamentally wrong understanding of LLMs.
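
To make the Markov-chain view concrete, here's a minimal sketch in Python: the next-token distribution is a function of the current context window (the state) and nothing else, which is exactly the Markov property. `next_token_distribution` is a hypothetical stand-in for a model's forward pass, not a real API.

    import random

    # Toy illustration of the "Markov chain with a rich state" view of a
    # decoder-only LLM: the next-token distribution is a function of the
    # current context window alone (the Markov property).

    def next_token_distribution(state):
        # A real model would run a forward pass over `state`; this dummy
        # table only looks at the last token, but nothing outside `state`.
        table = {
            "the": {"cat": 0.6, "dog": 0.4},
            "cat": {"sat": 0.7, "ran": 0.3},
            "dog": {"ran": 1.0},
        }
        return table.get(state[-1], {"<eos>": 1.0})

    def sample(prompt, max_tokens=5, context_window=8):
        tokens = list(prompt)
        for _ in range(max_tokens):
            state = tuple(tokens[-context_window:])  # the chain's entire state
            dist = next_token_distribution(state)
            token = random.choices(list(dist), weights=list(dist.values()))[0]
            if token == "<eos>":
                break
            tokens.append(token)
        return tokens

    print(sample(["the"]))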

The RL claims are also odd because, for starters, RLHF is not "reinforcement learning" based on any classical definition of RL (which almost always involves an online component). And further, you can chat with anyone who has kept up with the RL field and quickly realize that this is also a technology that still hasn't quite delivered on the promises it's been making (despite being an incredibly interesting area of research). There's no reason to speculate that RL techniques will work with "agents" where they have failed to achieve widespread success in similar domains.

I continue to be confused why smart, very technical people can't just talk about LLMs honestly. I personally think we'd have much more progress if we could have conversations like "Wow! The performance of a Markov Chain with proper state representation is incredible, let's understand this better..." rather than "AI is reasoning intelligently!"

I get why non-technical people get caught up in AI hype discussions, but for technical people who understand LLMs it seems counterproductive. Even more surprising to me is that this hype has completely destroyed any serious discussion of the technology and how to use it. There's so much opportunity lost around practical uses of incorporating LLMs into software while people wait for agents to create mountains of slop.

akomtu 12/21/2025|||
> why smart, very technical people can't just talk about LLMs honestly

Because those smart people are usually low-rung employees while their bosses are often AI fanatics. Were they to express anti-AI views, they would be fired. Then this mentality slips into their thinking outside of work.

krackers 12/20/2025|||
>Decoder-only LLMs (the ones we use) are literally Markov Chains

Real-world computers (the ones we use) are literally finite state machines

crystal_revenge 12/20/2025||
Only if the computer you use does not have memory. Definitionally, if you are writing to and reading from memory, you are not using an FSM.
krackers 12/20/2025||
No, it can still be modeled as a finite state machine. Each state just encodes the configuration of your memory. I.e. if you have 8 bits of memory, your state space just encodes 2^8 states for each memory configuration.

Any real-world deterministic thing can be encoded as an FSM if you make your state space big enough, since by definition it has only a finite number of states.
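
A toy sketch of that folding, using nothing beyond the standard library: a machine with n bits of memory and a deterministic update rule becomes an FSM whose state space enumerates all 2^n memory configurations.

    from itertools import product

    # Toy illustration of "memory folded into the state": a machine with
    # N_BITS of RAM and a deterministic update rule is a finite state
    # machine over 2**N_BITS states. The "program" here just increments
    # a 3-bit register on the input symbol "tick".

    N_BITS = 3
    states = list(product([0, 1], repeat=N_BITS))  # all 2**3 = 8 memory configurations

    def step(state, symbol):
        value = int("".join(map(str, state)), 2)
        if symbol == "tick":
            value = (value + 1) % (2 ** N_BITS)  # wraps around: finitely many states
        return tuple(int(b) for b in format(value, "0{}b".format(N_BITS)))

    # The explicit transition table an FSM definition asks for.
    transitions = {(s, "tick"): step(s, "tick") for s in states}
    print(len(states), "states,", len(transitions), "transitions")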

crystal_revenge 12/20/2025||
You could model a specific instance of using your computer this way, but you could not capture the fact that you can execute arbitrary programs with your PC represented as an FSM.

Your computer is strictly more computationally powerful than an FSM or PDA, even though you could represent particular states of your computer this way.

The fact that you can model an arbitrary CFG as a regular language with limited recursion depth does not mean there's no meaningful distinction between regular languages and CFGs.

krackers 12/20/2025||
> you can execute arbitrary programs with your PC represented as an FSM

You cannot execute arbitrary programs with your PC; your PC is limited in how much memory and storage it has access to.

>Your computer is strictly more computationally powerful

The abstract computer is, but _your_ computer is not.

>model an arbitrary CFG as an regular language with limited recursion depth does not mean there’s no meaningful distinction between regular languages and CFG

Yes, this I agree with. But going back to your argument, claiming that LLMs with a fixed context window are basically Markov chains so they can't do anything useful is reductio ad absurdum in the exact same way as claiming that real-world computers are finite state machines.

A more useful argument on the upper bound of computational power would be along the lines of circuit complexity, I think. But even this does not really matter. An LLM does not need to be Turing complete even conceptually. When paired with tool use, it suffices that the LLM can merely generate programs that are then fed into an interpreter. (And the grammar of Turing-complete programming languages can be made simple enough; you can encode Brainfuck in a CFG.) So even if an LLM could only ever produce programs from a CFG grammar, the combination of LLM + Brainfuck executor would give Turing completeness.
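
As a rough sketch of that generator-plus-executor pairing (the example program is hand-written, not model output), the executor side fits in a few dozen lines of Python:

    # Minimal Brainfuck interpreter, sketching the generator-plus-executor
    # argument: even if a generator can only emit strings from a simple
    # grammar, pairing it with this executor yields a Turing-complete
    # system (up to tape size). The program below prints "Hi".

    def run_bf(code, tape_len=30000):
        tape, ptr, pc, out = [0] * tape_len, 0, 0, []
        stack, match = [], {}
        for i, c in enumerate(code):  # precompute matching brackets
            if c == "[":
                stack.append(i)
            elif c == "]":
                j = stack.pop()
                match[i], match[j] = j, i
        while pc < len(code):
            c = code[pc]
            if c == ">":
                ptr += 1
            elif c == "<":
                ptr -= 1
            elif c == "+":
                tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-":
                tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".":
                out.append(chr(tape[ptr]))
            elif c == "[" and tape[ptr] == 0:
                pc = match[pc]
            elif c == "]" and tape[ptr] != 0:
                pc = match[pc]
            pc += 1
        return "".join(out)

    print(run_bf("+" * 72 + "." + "+" * 33 + "."))  # 72 -> 'H', 105 -> 'i'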

Edit: There was this recent HN article along those lines. https://news.ycombinator.com/item?id=46267862.

crystal_revenge 12/20/2025||
> so they can't do anything useful

I never claimed that. They demonstrate just how powerful Markov chains can be with sophisticated state representations. Obviously LLMs are useful, I have never claimed otherwise.

Additionally, it doesn’t require any logical leaps to understand decoder-only LLMs as Markov chains: they preserve the Markov property and otherwise behave exactly like them. It’s worth noting that encoder-decoder LLMs do not preserve the Markov property and cannot be considered Markov chains.

Edit: I saw that post and at the time was disappointed by how confused the author was about those topics and how they apply to the subject.

piker 12/20/2025||
> There are certain tasks, like improving a given program for speed, for instance, where in theory the model can continue to make progress with a very clear reward signal for a very long time.

Super skeptical of this claim. Yes, if I have some toy, poorly optimized Python example or maybe a sorting algorithm in ASM, but this won’t work in any non-trivial case. My intuition is that the LLM will spin its wheels at a local minimum whose performance is overdetermined by millions of black-box optimizations in the interpreter or compiler, whose signal is not fed back to the LLM.
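
For concreteness, the loop being debated might look roughly like the sketch below; `llm_propose_patch` is a hypothetical placeholder rather than a real API, and the harness only keeps a candidate that stays correct and measurably beats the incumbent.

    import time

    # Sketch of "optimize a program in a loop against a clear reward
    # signal": benchmark the current version, ask a model for a rewrite,
    # and keep the rewrite only if it is still correct and measurably
    # faster. `source` is expected to define a function named `f`.

    def llm_propose_patch(source, runtime_s):
        raise NotImplementedError  # would prompt a model with the code and its timing

    def load(source):
        ns = {}
        exec(source, ns)
        return ns["f"]

    def correct(fn, tests):
        return all(fn(x) == y for x, y in tests)

    def runtime(fn, tests, repeats=5):
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            for x, _ in tests:
                fn(x)
            best = min(best, time.perf_counter() - t0)
        return best

    def optimize_loop(source, tests, rounds=20):
        best_src, best_t = source, runtime(load(source), tests)
        for _ in range(rounds):
            cand_src = llm_propose_patch(best_src, best_t)
            cand = load(cand_src)
            if not correct(cand, tests):   # reward gate 1: correctness
                continue
            t = runtime(cand, tests)
            if t < best_t:                 # reward gate 2: measured speedup
                best_src, best_t = cand_src, t
        return best_src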

NitpickLawyer 12/20/2025||
> but this won’t work in any non-trivial case

Earlier this year Google shared that one of their projects (I think it was AlphaEvolve) found an optimisation in their stack that sped up their real-world training runs by 1%. As we're talking about Google here, we can be pretty sure it wasn't some trivial Python trick that they missed. Anyhow, at ~$100M per training run, that's a $1M saving right there, each and every time they run a training run!

And in the past month Google also shared another "agentic" workflow where they had Gemini 2.5 Flash (their previous-gen "small" model) work autonomously on migrating codebases to support the aarch64 architecture. There they found that ~30% of the projects worked flawlessly end-to-end. Whatever costs they save from switching to ARM will translate into real-world dollars saved (and at Google scale, those can add up quickly).

piker 12/20/2025|||
The second example has nothing to do with the first. I am optimistic that LLMs are great for translations with good testing frameworks.

“Optimize” in a vacuum is a tarpit for an LLM agent today, in my view. The Google case is interesting, but 1%, while significant at Google scale, doesn’t move the needle much in terms of statistical significance. It would be more interesting to see the exact operation and the speedup achieved relative to the prior version. But it’s data contrary to my view, for sure. The cynic also notes that Google is in the LLM hype game now, too.

NitpickLawyer 12/20/2025||
Why do you think it's not relevant to the "optimise in a loop" thing? The way I think of it, it's using LLMs "in a loop" to move something from arch A (that costs x$) to arch B (that costs y$), where y is cheaper than x. It's still an autonomous optimisation done by LLMs, no?
piker 12/20/2025||
Did the LLM suggest moving to the new architecture? If not that’s not what’s under discussion. That’s just following an order to translate.
NitpickLawyer 12/20/2025||
Ah, I see your point.
Jaxan 12/20/2025|||
> As we're talking about google here, we can be pretty sure it wasn't some trivial python trick that they missed.

Strong disagree on the reasoning here. Especially since Google is big and has thousands of developers, there could be a lot of code and a lot of low-hanging fruit.

NitpickLawyer 12/20/2025||
> By finding smarter ways to divide a large matrix multiplication operation into more manageable subproblems, it sped up this vital kernel in Gemini’s architecture by 23%, leading to a 1% reduction in Gemini's training time.

The message I replied to said "if I have some toy, poorly optimized Python example". I think it's safe to say that matmul & kernel optimisation is a bit beyond a small Python example.
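
For readers unfamiliar with the idea, a toy blocked matmul shows what "dividing a large matrix multiplication operation into more manageable subproblems" means in principle; this is plain tiling, not AlphaEvolve's actual kernel or its discovered heuristic.

    import numpy as np

    # Toy illustration of splitting a large matrix multiplication into
    # smaller block subproblems (blocked/tiled matmul).

    def blocked_matmul(A, B, block=64):
        n, k = A.shape
        k2, m = B.shape
        assert k == k2
        C = np.zeros((n, m), dtype=A.dtype)
        for i in range(0, n, block):
            for j in range(0, m, block):
                for p in range(0, k, block):
                    # Each (i, j, p) triple is an independent small subproblem.
                    C[i:i+block, j:j+block] += A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
        return C

    A = np.random.rand(256, 256)
    B = np.random.rand(256, 256)
    assert np.allclose(blocked_matmul(A, B), A @ B)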

andy99 12/20/2025|||
There was a discussion the other day where someone asked Claude to improve a code base 200x https://news.ycombinator.com/item?id=46197930
exitb 12/20/2025||
That’s most definitely not the same thing, as „improving a codebase” is an open ended task with no reliable metrics the agent could work against.
dist-epoch 12/20/2025||
https://github.com/algorithmicsuperintelligence/openevolve
piker 12/20/2025||
https://chatgpt.com/backend-api/estuary/public_content/enc/e...
a_bonobo 12/20/2025||
>* For years, despite functional evidence and scientific hints accumulating, certain AI researchers continued to claim LLMs were stochastic parrots: probabilistic machines that would: 1. NOT have any representation about the meaning of the prompt. 2. NOT have any representation about what they were going to say. In 2025 finally almost everybody stopped saying so.

Man, Antirez and I walk in very different circles! I still feel like LLMs fall over backwards once you give them an 'unusual' or 'rare' task that isn't likely to be presented in the training data.

oersted 12/20/2025||
LLMs certainly struggle with tasks that require knowledge that is not provided to them (at significant enough volume/variance to retain it). But this is to be expected of any intelligent agent; it is certainly true of humans. It is not a good argument for the claim that they are Chinese Rooms (unthinking imitators). Indeed, the whole point of the Chinese Room thought experiment was to consider whether that distinction even mattered.

When it comes to being able to do novel tasks on known knowledge, they seem to be quite good. One also needs to consider that problem-solving patterns are also a kind of (meta-)knowledge that needs to be taught, either through imitation/memorisation (Supervised Learning) or through practice (Reinforcement Learning). They can be logically derived from other techniques to an extent, just like new knowledge can be derived from known knowledge in general, and again LLMs seem to be pretty decent at this, but only to a point. Regardless, all of this is definitely true of humans too.

feverzsj 12/20/2025||
In most cases, LLMs have the knowledge (data). They just can't generalize it like humans do. They can only reflect explicit things that are already there.
oersted 12/20/2025||
I don't think that's true. Consider that the "reasoning" behaviour trained with Reinforcement Learning in the last generation of "thinking" LLMs is trained on quite narrow datasets of olympiad math / programming problems and various science exams, since exact unambiguous answers are needed to have a good reward signal, and you want to exercise it on problems that require non-trivial logical derivation or calculation. Then this reasoning behaviour gets generalised very effectively to a myriad of contexts the user asks about that have nothing to do with that training data. That's just one recent example.
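
A toy version of that kind of verifiable reward, assuming the common convention of a final boxed answer (the regex and the \boxed{} convention are illustrative choices, not any lab's actual grading harness):

    import re

    # Toy "verifiable reward": for problems with one exact answer, the
    # reward is simply whether the extracted final answer matches the
    # ground truth.

    def extract_answer(completion):
        m = re.search(r"\\boxed\{([^}]*)\}", completion)
        return m.group(1).strip() if m else None

    def reward(completion, gold):
        ans = extract_answer(completion)
        return 1.0 if ans is not None and ans == gold.strip() else 0.0

    print(reward(r"... so the total is \boxed{42}", "42"))    # 1.0
    print(reward("I think the answer is probably 42", "42"))  # 0.0: no parseable answer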

Generally, I use LLMs routinely on queries definitely no-one has written about. Are there similar texts out there that the LLM can put together and get the answer by analogy? Sure, to a degree, but at what point are we gonna start calling that intelligent? If that's not generalisation I'm not sure what is.

To what degree can you claim as a human that you are not just imitating knowledge patterns or problem-solving patterns, abstract or concrete, that you (or your ancestors) have seen before? Either via general observation or through intentional trial and error. It may be a conscious or unconscious process; many such patterns get baked into what we call intuition.

Are LLMs as good as humans at this? No, of course not, though sometimes they get close. But that's a question of degree; it's no argument for claiming that they are somehow qualitatively lesser.

SCdF 12/25/2025||
Late to this, but my interpretation of the parent's point was e.g.: LLMs still often produce bad code, despite "reading" every book about programming ever written. Simplistically, they aren't taking the knowledge from those books and applying it to the knowledge of the code they've scraped; they are just using the scraped output. You can then separately ask them about knowledge from those books, but if you then go back and get them to code again, they still won't follow the advice they just gave you.
jmfldn 12/20/2025|||
"In 2025 finally almost everybody stopped saying so."

I haven't.

dist-epoch 12/20/2025||
Some people are slower to understand things.
yeasku 12/23/2025|||
That is why they need artificial intelligence
jmfldn 12/20/2025|||
Well exactly ;)
barnabee 12/20/2025|||
I don’t think this is quite true.

I’ve seen them do fine on tasks that are clearly not in the training data, and it seems to me that they struggle when some particular type of task or solution or approach might be something they haven’t been exposed to, rather than the exact task.

In the context of the paragraph you quoted, that’s an important distinction.

It seems quite clear to me that they are getting at the meaning of the prompt and are able, at least somewhat, to generalise and connect aspects of their training to “plan” and output a meaningful response.

This certainly doesn’t seem all that deep (at times frustratingly shallow) and I can see how at first glance it might look like everything was just regurgitated training data, but my repeated experience (especially over the last ~6-9 months) is that there’s something more than that happening, which feels like what Antirez was getting at.

Kiro 12/20/2025||
Give me an example of one of those rare or unusual tasks.
a_bonobo 12/21/2025|||
I work on a few HPC systems with unusual, kinda custom-rolled architectures. A whole bunch of Python and R packages fail to compile on these systems. There's no publicly accessible documentation for these HPC systems, nor for these custom architectures. ChatGPT and Claude so far have given me only wrong advice on how to get around these compilation errors and there's not much on Google for these errors, but HPC staff usually knew what to do.
recursive 12/20/2025|||
Set the font size of a simple field in OpenXML. Doesn't even seem that rare. It said to add a run inside and set the font there. Didn't do anything. I ended up reverse-engineering the output of MS Word. This happened yesterday.
Joel_LeBlanc 12/30/2025||
It's fascinating to see how AI is reshaping the landscape for digital assets—buying websites or e-commerce stores has become more accessible than ever. When evaluating potential investments, I always stress the importance of thorough due diligence; I've found that using tools like DREA (Digital Real Estate Analyzer) can really streamline the process and provide valuable insights. It's all about understanding the numbers and the potential for growth, especially in such a dynamic environment. What specific metrics are you focusing on?
fleebee 12/20/2025||
> The fundamental challenge in AI for the next 20 years is avoiding extinction.

That's a weird thing to end on. Surely it's worth more than one sentence if you're serious about it? As it stands, it feels a bit like the fearmongering Big Tech CEOs use to drive up the AI stocks.

If AI is really that powerful and I should care about it, I'd rather hear about it without the scare tactics.

dist-epoch 12/20/2025||
Yeah, well known marketing trick that Big Companies do.

Oil companies: we are causing global warming with all this carbon emissions, are you scared yet? so buy our stock

Pharma companies: our drugs are unsafe, full of side effects, and kill a lot of people, are you scared yet? so buy our stock

Software companies: our software is full of bugs, will corrupt your files and make you lose money, are you scared yet? so buy our stock

Classic marketing tactics, very effective.

Recursing 12/20/2025|||
I think https://en.wikipedia.org/wiki/Existential_risk_from_artifici... has much better arguments than the LessWrong sources in other comments, and they weren't written by Big Tech CEOs.

Also "my product will kill you and everyone you care about" is not as great a marketing strategy as you seem to imply, and Big Tech CEOs are not talking about risks anymore. They currently say things like "we'll all be so rich that we won't need to work and we will have to find meaning without jobs"

tejohnso 12/20/2025|||
What makes it a scare tactic? There are other areas in which extinction is a serious concern and people don't behave as though it's all that scary or important. It's just a banal fact. And for all of the extinction threats, AI included, it's very easy to find plenty of deep dive commentary if you care.
grodriguez100 12/20/2025|||
I would say yes, everyone should care about it.

There is plenty of material on the topic. See for example https://ai-2027.com/ or https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

emp17344 12/20/2025|||
The fact that people here take AI 2027 seriously is embarrassing. The authors are already beginning to walk back these claims: https://x.com/eli_lifland/status/1992004724841906392?s=20
jowea 12/20/2025||||
And I thought the rest of the thread was anxiety-inducing. Thanks for the nightmares lol.
dkdcio 12/20/2025|||
fear mongering science fiction, you may as well cite Dune or Terminator
defrost 12/20/2025|||
There's arguably more dread and quiet constrained horror in With Folded Hands ... (1947)

  Despite the humanoids' benign appearance and mission, Underhill soon realizes that, in the name of their Prime Directive, the mechanicals have essentially taken over every aspect of human life.

  No humans may engage in any behavior that might endanger them, and every human action is carefully scrutinized. Suicide is prohibited. Humans who resist the Prime Directive are taken away and lobotomized, so that they may live happily under the direction of the humanoids. 
~ https://en.wikipedia.org/wiki/With_Folded_Hands_...
XorNot 12/20/2025||
This hardly disproves the point: no one is taking this topic seriously. They're just making up a hostile scenario from science fiction and declaring that's what'll happen.
lm28469 12/20/2025|||
Lesswrong looks like a forum full of terminally online neckbeards who discovered philosophy 48 hours ago, you can dismiss most of what you read there don't worry
timmytokyo 12/20/2025||
If only they had discovered philosophy. Instead they NIH their own philosophy, falling into the same ditches real philosophers climbed out of centuries ago.
VladimirGolovin 12/20/2025||
This has been well discussed before, for example in this book: https://ifanyonebuildsit.com/
rckt 12/20/2025|
> Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway

Here we go again. Statements whose single source is the head of the speaker. And it’s also not true. The LLMs still produce bad/irrelevant code at such a rate that you can spend more time prompting than doing things yourself.

I’m tired of this overestimation of LLMs.

xiconfjs 12/20/2025||
My personal experience: if I can find a solution on Stack Overflow etc., the LLM will produce working and fundamentally correct code. If I can't find a ready-made solution on these sites, the LLM hallucinates like crazy (never-existing functions/modules/plugins, protocol features which aren't specified, and even GitHub repos which never existed). So, as stated by many people online before: for low-hanging fruit, LLMs are a totally viable solution.
danielbln 12/20/2025||
I don't remember the last time Claude Code hallucinated some library, as it will check the packages, verify with the linter, run a test import and so on.

Are you talking about punching something into some LLM web chat that's disconnected from your actual codebase and has tooling like web search disabled? If so, that's not really the state of the art of AI assisted coding, just so you know.

yeasku 12/23/2025||
6 months.
barnabee 12/20/2025|||
Even where they are not directly using LLMs to write the most critical or core code, nearly every skeptic I know has started using LLMs at very least to do things like write tests, build tools, write glue code, help to debug or refactor, etc.

Your statement suffers not only from also coming only from your own head, with no evidence that you've actually tried to learn to use these tools; it also goes against the weight of evidence that I see both in my professional network and online.

rckt 12/20/2025|||
I just want people making statements like the author's to be more specific about how exactly the LLMs are being used. Otherwise they contribute to this belief that LLMs are a magical tool that can do anything.

I am aware of simple routine tasks that LLMs can do. This doesn’t change anything about what I said.

danielbln 12/20/2025|||
All you had to do was scroll down further and read the next couple of posts, where the author is more specific about how they used LLMs.

I swear, the so-called critics need everything spoon-fed.

Kiro 12/20/2025|||
Sorry, but we're way past that. It's you who need to provide examples of tasks it can't do.
AnimalMuppet 12/20/2025|||
You need to meet more skeptics. (Or maybe I do.) In my world, it's much more rare than you say.
iamflimflam1 12/20/2025|||
But you have just repeated what you are complaining about.
rckt 12/20/2025||
Do you want me to spend time coming up with a quality response to a lazy statement? It’s like tilting at windmills. I’m fine with having my say the way I did.
bgwalter 12/20/2025|||
> Here we go again.

Indeed, he said the same as a reflection on 2024 models:

https://news.ycombinator.com/item?id=42561151

It is always the fault of the "luser" who is not using and paying for the latest model.

locknitpicker 12/20/2025||
> Here we go again. Statements with the single source in the head of the speaker. And it’s also not true.

You're making the same sort of baseless claim you are criticising the blogger for making. Spewing baseless claims hardly moves any discussion forward.

> The LLMs still produce bad/irrelevant code at such a rate that you can spend more time prompting than doing things yourself.

If that is your personal experience then I regret to tell you that it is only a reflection of your own inability to work with LLMs and coding agents. Meanwhile, I personally manage to use LLMs effectively for anything between small refactoring needs and large software architecture designs, including generating fully working MVPs in one-shot agent prompts. From this alone it's rather obvious whose statements are baseless and whose are more aligned with reality.
