Posted by danielfalbo 3 days ago

Reflections on AI at the End of 2025 (antirez.com)
238 points | 358 comments
lolz404 3 days ago
This article does little to support its claims, but it was a good primer for diving into some of these topics.

These are cool new tools; use them where you can, but there is a ton of research still left to do. I can only laugh at the hubris of Silicon Valley thinking it will make something so smart it drives humankind extinct. Extinction will come from lack of water and an overheated planet first :)

The stochastic parrot argument is still debated, but it is more nuanced than before, although the original author still stands by the statement. Evidence of internal planning varies per model: Anthropic's attribution-graphs research found support for it in rhyming, but Gemma didn't show the same.

The idea of "understanding" is still up for debate as well. Sure, when models are directly trained on data there is representation. Othello-GPT Studies was one way to support but that was during training so some interal representation was created. Out of distribution task will collapse to confabulation. Apple's GSM-Symbolic Research seems to support that.

Chain of thought is a helpful tool but is untrustworthy at best. Anthropic themselves have shown this: https://www.anthropic.com/research/reasoning-models-dont-say...

rldjbpin 2 days ago
the reflections felt like a mixed bag: someone who knows the technical aspects more deeply than the average person, while simultaneously sounding like an astrologist.

personally, as someone building on top of gen AI for a living, i finally bit the bullet on building using LLMs. they reduced friction in the things i don't like doing and had not explored much; by acting as a catalyst when i finally needed to address those things, they helped me get going and eventually become proficient in the core tech itself.

outside of work, however, i find people around me use the services much more than i do. sometimes it felt like "big data is like teenage sex"[1], but some aspects were quite genuine. i got a better appreciation after trying them, both to understand other people's perspectives and to design better.

with "slop" as word of the year and people wondering if a random clip is AI, now more than ever the effects in general life seems apparent. it is not as sexy as "i will lose my job soon", but the effects are here and now. while the next year will be even more interesting, i can't wait for the bubble to burst.

[1] https://hewlett.org/is-big-data-like-teenage-sex/

bgwalter 3 days ago
Regarding the stochastic parrots:

It is easy to see that LLMs exclusively parrot by asking them about current political topics [1], where they cannot plagiarize settled history from Wikipedia and Britannica.

But of course there is also the equivalence between LLMs and Markov chains. As far as I can see, it does not rely on absurd constructions like encoding all possible output states in an infinite Markov chain:

https://arxiv.org/abs/2410.02724
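
The intuition behind the equivalence (a toy illustration, not the paper's construction): any model that samples the next token from a distribution conditioned only on a bounded context window is a Markov chain whose state is that window. Here simple n-gram counts stand in for whatever conditional distribution the model actually computes:

    import random
    from collections import defaultdict

    k = 2  # context window size (toy value)
    corpus = "the cat sat on the mat the cat ran".split()

    # "Training": count next-token frequencies per k-token state.
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(corpus) - k):
        counts[tuple(corpus[i:i + k])][corpus[i + k]] += 1

    # Generation: the next state depends only on the current state,
    # which is exactly the Markov property.
    state = tuple(corpus[:k])
    out = list(state)
    for _ in range(6):
        options = counts.get(state)
        if not options:
            break  # dead end: state never seen with a continuation
        tokens, weights = zip(*options.items())
        out.append(random.choices(tokens, weights=weights)[0])
        state = tuple(out[-k:])
    print(" ".join(out))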

Then there is stochastic parrot research:

https://arxiv.org/abs/2502.08946

"The stochastic parrot phenomenon is present in LLMs, as they fail on our grid task but can describe and recognize the same concepts well in natural language."

As noted above, this is obvious to anyone who has interacted with LLMs. Most researchers know what is expected of them if they want to get funding, and will not investigate the obvious too deeply.

[1] They have Internet access of course.

bgwalter 3 days ago
They are very advanced stochastic parrots that allow AI-invested authors to suddenly write in perfect English.

If Antirez has never gotten an LLM to make an absolutely embarrassing mistake, he must be very lucky, or we should stop listening to him.

Programmers' resistance has not weakened. Since ORCL dropped 40%, anti-LLM opinions have been censored and downvoted here. Many people have given up, and we always get articles from the same LLM influencers.

HellDunkel 3 days ago
[flagged]
danielbln 3 days ago
Must feel nice to let yourself be coddled by in-group/out-group thinking like that. "I've decided that AI is bad and useless, therefore anyone disagreeing must be an AI bro".
ctoth 3 days ago
> The fundamental challenge in AI for the next 20 years is avoiding extinction.

So nice to see people who think about this seriously converge on this. Yes. Creating something smarter than you was always going to be a sketchy prospect.

All of the folks insisting it just couldn't happen or ... well, there have just been so many objections. The goalposts have walked from one side of the field to the other, and then left the stadium, went on a trip to Europe, got lost in a beautiful little village in Norway, and decided to move there.

All this time, though, the prospect of instantiating something smarter than you has loomed (and yes, it will be smarter than you even if it is at human level, because of electronic speeds). This whole idea is just cursed, and we should not do the thing.

Mawr 3 days ago
> Creating something smarter than you was always going to be a sketchy prospect.

Sure, but it is not so clear that this has any relevance to the topic at hand. You seem to be taking it for granted that LLMs can ever reach that level.

It may be that all it takes is scaling up, and that at some point some threshold gets crossed past which intelligence emerges. Maybe.

Personally, I'm more on board with the idea that since LLMs display approximately 0 intelligence right now, no amount of scaling will help and we need a fundamentally different approach if we want to create AGI.

cheschire 3 days ago
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
feverzsj 3 days ago
Seems they also want some AI money [0]. Guess I'll keep using Valkey.

[0] https://redis.io/redis-for-ai/

danielfalbo 3 days ago
> they

I'm not sure antirez is involved in any business decision making process at Redis Ltd.

He may not be part of "they".

antirez 3 days ago
I'm not involved in business decisions, and while I'm very AI-positive, I believe Redis as a company should focus on Redis fundamentals; so my piece has zero alignment with what I hope for the company.
sibellavia 3 days ago
In any case, what would be the problem? The page you mentioned simply illustrates how the product can be used in a specific domain; it doesn't seem forced to me.
bgwalter 3 days ago
Conflict of interest and disclosure posts are frequently downvoted.
tptacek 3 days ago
You mean flagged.

Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

https://news.ycombinator.com/newsguidelines.html

bgwalter 3 days ago
Ah, so you just went through my history and downvoted everything in sight! Thanks for confirming.
tptacek 3 days ago
I don't follow? I didn't flag you; you were remarking on a previous comment alleging shillage from 'antirez, and I'm pointing out that the behavior you say is "downvoted" is actually a black-letter guideline violation. People flag those posts.

Another one, though:

Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

bgwalter 3 days ago
I can't help you if you repeatedly misinterpret me. Once you made the first response in this subthread, 4 or 5 of my comments went from 1 to 0 or -1. Cum hoc ergo propter hoc? Maybe.

I'll design a system for the Senate that enables outside voters first to turn down the volume of a speaker's microphone when he says that another senator works for company X, and then to remove him from the floor. That'll be a great success for democracy and "intellectual curiosity", which is also in the guidelines.