
Posted by cdrnsf 5 days ago

Let's talk about LLMs (www.b-list.org)
194 points | 186 comments | page 3
cpharsh410 5 days ago|
[flagged]
lacymorrow 5 days ago||
[dead]
AIorNot 5 days ago||
The problem with this article is that he is right, of course, but only right now. There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight. Yes, we will likely always need a few experts.

I'm reminded of this scene from the Matrix: https://www.youtube.com/watch?v=cD4nhYR-VRA where the older wise man discusses society's reliance on AI

"Nobody cares how it works, as long as it works"

We're done. I for one welcome our new AI Overlords, or, more accurately, the tech bro billionaires who are pulling the strings.

frizlab 5 days ago||
> There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight

There are, IMHO, fewer reasons to believe they will be able to do that than reasons to believe they won't, though.

CamperBob2 5 days ago||
LLMs became much better at both reviewing and writing code over the last 12-18 months. Did you?

The current state of the art is irrelevant. Only the first couple of time derivatives matter.

paulhebert 5 days ago||
> Did you?

I would say I got better at both of those over the last 12-18 months. Are your skills static?

CamperBob2 5 days ago|||
Compared to Claude or GPT 5.5? Yeah, my skills are static relative to the progress seen recently. So are yours, unless your grandpa was named von Neumann or Szilard.
eiekeww 5 days ago|||
My brain got better at thinking deeper when I stopped using llms.

Lmao, why does it seem outlandish to other people? Perhaps they never thought deeply enough in the first place to recognise it.

slopinthebag 5 days ago||
> There is no reason to believe that future AI platforms won't be able to review code themselves and manage some aspects of themselves with minimal human oversight

Really? That's like someone during an economic boom saying "The economy is the worst it'll ever be. There is no reason to expect things to not continue to improve".

pheaded_while9 4 days ago||
That simile breaks down because - unlike the state of the economy - the collective human capacity to understand, design, and produce these systems essentially only goes one way, barring the apocalyptic.
gizajob 5 days ago||
Actually can we not thanks.
keybored 5 days ago||
I have no stake in Fred Brooks. But No Silver Bullet seemed to be taken as gospel on this board. Sufficiently productivity-enhancing technology? Gimme a break man. Maybe you’ll get a 30% boost. Not a 10X boost.

Until recently. dramatic pause

And then AI happened.

taormina 5 days ago|
Great! So all of this 10x boosting is visible in which economic indicator?
slopinthebag 5 days ago||
Debt.
cadamsdotcom 5 days ago||
> If its two empirical premises—that the accidental/essential distinction is real and that the accidental difficulty remaining today does not represent 90%+ of total—are true, then the conclusion which rules out an order-of-magnitude gain from reducing accidental difficulty follows automatically.

The article goes on to assume there’s no 10x gain to be had but misses one big truth.

Needing to type the code is an enormous source of accidental difficulty (typing speed, typos, whether you can be arsed to put your hands on the keyboard today…) and it is gone thanks to coding agents.

stackghost 5 days ago|
Let's actually not talk about LLMs.

I honestly couldn't force myself to finish yet another blog post about how "we're not yet sure what impact LLMs will have on society," or whatever belabored point the author was attempting to make.

"Some random person's take on LLMs" was maybe interesting in 2024. Today it is not even remotely interesting.

There are a gazillion more interesting things happening today that ought to be of interest to the median HN reader. Can we talk about those instead?

jubilanti 5 days ago||
I'm confused. If you don't want to talk about LLMs then why didn't you just flag the post and move on? Submit something interesting, upvote and comment on interesting posts, instead of feeding the engagement on this thread.

It sounds like you actually do want to talk about how much you don't want other people to talk about LLMs.

stackghost 5 days ago|||
Oh, I definitely flagged the post also.
famouswaffles 5 days ago|||
You're not supposed to flag a post for something like that. Ideally you downvote and move on if you feel that strongly about it. Flagging is meant to be reserved for stuff that breaks the rules or guidelines.
WolfeReader 5 days ago||
Stories can't be downvoted.
mettamage 5 days ago||
I am an AI engineer and I honestly agree. Talking about LLMs feels like the new crypto, with some nuances (i.e. many innovative things being possible and done with LLMs whereas crypto innovations were… few and far between).
dijksterhuis 5 days ago|||
it’s felt like the new crypto to me for about 2-3 years now.

i was doing an ML Sec phd a year or two before all this hype took off. i took one of the OG transformer papers along to present at our official little phd reading group when the paper was only a few months old (the details of this might be a bit sketchy here, was years ago now).

now i want nothing to do with the field in any way shape or form. i’m just done.

edit -- i got incredibly angry after writing this comment. pure hatred and spite for all the charlatans and accompanying bullshit.

eiekeww 5 days ago||
Sadly, investing is all about making money… you should be more pissed at the naive people who have contributed to the effort, and in particular at those who don't care about truth but about cash-flow potential.
dijksterhuis 4 days ago||
everyone involved is responsible, just to different degrees.
keybored 5 days ago|||
Tedious LLM discourse isn’t aimed at AI engineers. It’s doomscrolling fodder for regular programmers.