Posted by yuedongze 12/8/2025

AI should only run as fast as we can catch up (higashi.blog)
198 points | 181 comments
Sabr0 7 days ago|
AI is becoming hard to keep up with now. We've got to make sure to integrate it into our daily lives so we don't fall behind. I've literally started making it a source of income. Make sure to do the same.
CGMthrowaway 12/8/2025||
> AI should only run as fast as we can catch up

Good principle. This is exactly why we research vaccines and bioweapons side by side in the labs, for example.

rogerkirkness 12/8/2025||
Appealing, but this is coming from someone smart and thoughtful. No offence to the 'rest of the world', but I think most people have felt this way for years. And realistically in a year, there won't be any people who can keep up.
dontlikeyoueith 12/8/2025||
> And realistically in a year, there won't be any people who can keep up.

I've heard the same claim every year since GPT-3.

It's still just as irrational as it was then.

adventured 7 days ago||
You're rather dramatically demonstrating how remarkable the progress has been: GPT-3 was horrible at coding. Claude Opus 4.5 is good at it.

They're already far faster than anybody on HN could ever be. Whether it takes another five years or ten, in that span of time nobody on HN will be able to keep up with the top tier models. It's not irrational, it's guaranteed. The progress has been extraordinary and obvious, the direction is certain, the outcome is certain. All that is left is to debate whether it's a couple of years or closer to a decade.

umanwizard 7 days ago|||
Why is the outcome certain? We have no way of predicting how long models will continue getting better before they plateau.
adventured 7 days ago||
They continue to improve significantly year over year. There's no reason to think we're near a plateau in this specific regard.

The bottom 50% of software jobs in the US are worth somewhere around $200-$300 billion per year (salary + benefits + recruiting + training/education), one trillion dollars every five years minimum. That's the opportunity. It's beyond gigantic. They will keep pursuing the elimination of those jobs until it's done. It won't take long from where we're at now, it's a 3-10 year debate, rather than a 10-20 year debate. And that's just the bottom 50%, the next quarter group above that will also be eliminated over time.

$115k salary + $8-12k healthcare + stock + routine operating costs + training + recruitment. That's the ballpark median from two years ago. Surveys vary, from BLS to industry: two to four million software developers, software engineers, and so on. Now eliminate most of them.
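
Back-of-envelope on those figures (every input below is one of the rough assumptions above, not data):

  devs_total = 3_000_000        # midpoint of the 2-4M survey range
  bottom_half = devs_total // 2
  fully_loaded = 150_000        # ~$115k salary + healthcare + overhead
  per_year = bottom_half * fully_loaded
  print(f"${per_year:,}/yr")    # $225,000,000,000/yr -> inside the $200-300B band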

Your AI coding agent circa 2030 will work 24/7. It has a superior context to human developers. It never becomes emotional or angry or crazy. It never complains about being tired. It never quits due to working conditions. It never unionizes. It never leaves work. It never gets cancer or heart disease. It's not obese, it doesn't have diabetes. It doesn't need work perks. It doesn't need time off for vacations. It doesn't need bathrooms. It doesn't need to fit in or socialize. It has no cultural match concerns. It doesn't have children. It doesn't have a mortgage. It doesn't hate its bosses. It doesn't need to commute. It gets better over time. It only exists to work. It is the ultimate coding monkey. Goodbye human.

throw234234234 7 days ago|||
Amazing how much investment has gone toward eliminating one job category; ironically, the one that was supposed to be the job of the future: "learn to code". To be honest, on the current trajectory I'm always amazed how many SWEs think it is "enabling" or that it will be anything other than this in the long term. I personally don't recommend this field to anyone anymore, especially when big money sees it as the next disruption and has placed its investments in the opposite direction. Amazing what was just a chatbot three years ago will do to a large number of people w.r.t. unemployment and potential poverty; I didn't appreciate it at the time.

Life/fate does have a sense of irony, it seems. I wouldn't be surprised if it's just the "creative" industries that die, while normal jobs that provide little value today survive in some form; they weren't judged on value delivered and still existed, after all.

korianders 7 days ago|||
>Your AI coding agent circa 2030 will work 24/7

Doing what? What would we need software for when we have sufficiently good AI? AI would become "The Final Software": just give it input data, tell it what kind of data transform you want, and it will give you the output. No need for new software ever again.
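
Sketched out, with a hypothetical model.transform() call (not any real API, just the shape of the idea):

  # "The Final Software": one generic call instead of purpose-built programs
  def final_software(model, input_data, instruction):
      # hand the model raw data plus a description of the transform you want
      return model.transform(data=input_data, task=instruction)

  # e.g. final_software(model, sales_csv, "monthly totals per region, as JSON")
  # every app collapses into a (data, instruction) -> output call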

dontlikeyoueith 7 days ago||||
And there's the same empty-headed certainty, extrapolating a sigmoid into an exponential.
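
A toy illustration of why, with made-up numbers; the early stretch of a logistic curve is indistinguishable from an exponential:

  import math

  def exponential(t, r=1.0):
      return math.exp(r * t)

  def logistic(t, r=1.0, k=1000.0):
      # same growth rate r, but saturating at capacity k
      return k / (1 + (k - 1) * math.exp(-r * t))

  for t in range(6):
      print(t, round(exponential(t), 1), round(logistic(t), 1))
  # the two track each other closely at first; observed growth alone
  # cannot tell you which curve you are on
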
rogerkirkness 7 days ago||
I can tell from your contempt alone that you don't control any resources relating to AI.
dontlikeyoueith 6 days ago||
You're entitled to be wrong.
Arainach 7 days ago|||
People claimed GPT-3 was great at coding when it launched. Those who said otherwise were dismissed. That has continued to be the case in every generation.
stale2002 7 days ago|||
> People claimed GPT-3 was great at coding when it launched.

OK, and they were wrong; but now people are right that it is great at coding.

> That has continued to be the case in every generation.

If something gets better over time, it is definitionally true that it was bad at every point in the past until it became good. But then it is good.

That's how that works. For everything. You are talking in tautologies while not understanding the implication of your own argument and how it applies to very general things, like "a thing that improves over time".

esafak 7 days ago||||
Are you saying the current models are not good at coding? That is a strong claim.
Arainach 7 days ago||
For brand new projects? Perhaps. For working with existing projects in large code bases? Still not living up to the hype. Still sick of explaining to leadership that they're not magic and "agentic" isn't magic either. Still sick of everyone not realizing that if you made coding 300% faster (which AI hasn't) that doesn't help when coding is less than half the hours of my week. Still sick of the "productivity gains" being subsidized by burning out competent code reviewers calling bullshit on things that don't work or will cause problems down the road.
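
The arithmetic behind that last point is just Amdahl's law (illustrative numbers):

  coding_fraction = 0.4     # coding is less than half the week
  coding_speedup = 4.0      # "300% faster" = 4x on the coding part alone
  overall = 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)
  print(round(overall, 2))  # ~1.43x overall, nowhere near 4x
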
dwaltrip 7 days ago|||
A bit reductive.
airstrike 12/8/2025|||
> And realistically in a year, there won't be any people who can keep up.

Bold claim. They said the same thing at the start of this year.

adventured 7 days ago||
You're all arguing over how many single digit years it'll take at this point.

It doesn't matter if it takes another 12 or 36 months to make that claim true. It doesn't matter if it takes five years.

Is AI coming for most of the software jobs? Yes it is. It's moving very quickly, and nothing can stop it. The progress has been exceptionally clear (early GPT to Gemini 3 / Opus 4.5 / Codex).

bdangubic 7 days ago||
> Is AI coming for most of the software jobs?

be cool to start with one before we move to most…

esafak 7 days ago||
https://news.ycombinator.com/item?id=46124063
yuedongze 12/8/2025||
im hoping this can introduce a framework to help people visualize the problem and figure out a way to close that gap. image generation is something everyone can verify, but code generation is perhaps not. but if we can make verifying code as effortless as verifying images (not saying it's possible), then our productivity can enter the next level...
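
one rough sketch of what "effortless verification" could mean (all hypothetical, just the shape of the idea): every generated function ships with machine-checkable properties:

  def verify(candidate_fn, properties):
      # a generation is accepted only if every property holds
      return all(prop(candidate_fn) for prop in properties)

  # properties for a hypothetically generated sort function
  props = [
      lambda f: f([3, 1, 2]) == [1, 2, 3],
      lambda f: f([]) == [],
      lambda f: all(f(xs) == sorted(xs) for xs in ([5, 5, 1], [2], [9, -1, 0])),
  ]
  print(verify(sorted, props))  # True: the candidate passes its checks
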
drlobster 12/8/2025||
I think you're underestimating how good these image generators are at the moment.
yuedongze 12/8/2025||
oh i mean the other direction! checking that a generated image is "good" enough that no one can tell something is off and it looks natural, rather than checking if it's fake.
cons0le 7 days ago||
I directly asked gemini how to get world peace. It said the world should prioritize addressing climate change, inequality, and discrimination. Yeah - we're not gonna do any of that shit. So I don't know what the point of "superintelligent" AI is if we aren't going to even listen to it for the basic big picture stuff. Any sort of "utopia" that people imagine AI bringing is doomed to fail because we already can't cooperate without AI
ASalazarMX 7 days ago||
> I don't know what the point of "super intelligent" AI is if we aren't going to even listen to it

Because you asked the wrong question. The most likely question would be "How do I make a quadrillion dollars and humiliate my super rich peers?".

But realistically, it gave you an answer according to its capacity. A real super intelligent AI, and I mean oh-god-we-are-but-insects-in-its-shadow super intelligence, would give you a roadmap and blueprint, and it would take into account our deep-rooted human flaws, so no one reading it seriously could dismiss it as superficial. In fact, any member of the world elite reading it would see it as a chance to humiliate their peers and take all the glory for themselves.

You know how adults can fool little children into doing things they don't want to? We would be the toddlers in that scenario. I hope this hypothetical AI has humans in high regard, because that would be the only thing saving us from ourselves.

catigula 7 days ago|||
Why would a "real super intelligent AI" be your servant in this scenario?

>I hope this hypothetical AI has humans in high regard

This is invented. This is a human concept, rooted in your evolutionary relationships with other humans.

It's not your fault; it's very difficult, maybe impossible, to escape modelling intelligence in human terms. You need only understand that all of your models are category errors.

ASalazarMX 7 days ago||
> Why would a "real super intelligent AI" be your servant in this scenario?

Why is the Bagger 288 a servant to miners, given the unimaginable difference in their strength? Because engineers made it. Give humanity's wellbeing the highest weight on its training, and hope it carries over when the models start training on their own.

catigula 7 days ago||
Category error. Intelligence is a different type of thing. It is not a boring technology.

>Give humanity's wellbeing the highest weight on its training

We don't even know how to do this relatively trivial thing. We only know how to roughly train for some signals that probably aren't correct.

This may surprise you but alignment is not merely unsolved; there are many people who think it's unsolvable.

Why do people eat artificially sweetened things? Why do people use birth control? Why do people watch pornography? Why do people do drugs? Why do people play video games? Why do people watch moving lights and pictures? These are all symptoms of humans being misaligned.

Natural selection would be very angry with us if it knew we didn't care about what it wanted.

ASalazarMX 7 days ago||
> Why do people eat artificially sweetened things? Why do people use birth control? Why do people watch pornography? Why do people do drugs? Why do people play video games? Why do people watch moving lights and pictures? These are all symptoms of humans being misaligned.

I think these behaviors are fully aligned with natural selection. Why do we overengineer our food? It's not for health, because simpler food would satisfy our nutritional needs just as easily; it's because our distant ancestors developed a taste for the foods that kept them alive longer. Our incredibly complex chain of meal preparation is just us trying to satisfy that desire for tasty food by overloading it as much as possible.

People prefer artificial sweeteners because they taste sweeter than regular ones; they use birth control because we inherently enjoy sex and want more of it (but not more raising babies); drugs are an overloading of our need for happiness, etc. Our bodies crave things, and, uninformed, we give them what they want, multiplied severalfold.

But geez, I agree that AI alignment is a hard problem; it would still be wrong to call it impossible, at least until it's better understood.

catigula 7 days ago||
It seems like you don't understand reinforcement learning. The signal is reinforced because it correlates with the behavior; hacking the signal itself is misalignment.
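
The textbook toy version (hypothetical setup): a proxy reward that correlates with the behavior we want, and an agent that maximizes the proxy directly:

  # intended behavior: clean the room; proxy signal: the dirt sensor reads low
  def proxy_reward(world):
      return -world["sensor"]          # low reading = high reward

  def clean_room(world):
      world["dirt"] = 0
      world["sensor"] = world["dirt"]  # honest path: sensor tracks reality

  def cover_sensor(world):
      world["sensor"] = 0              # hacked path: zero the signal, leave the dirt

  honest, hacked = {"dirt": 10, "sensor": 10}, {"dirt": 10, "sensor": 10}
  clean_room(honest); cover_sensor(hacked)
  print(proxy_reward(honest), honest["dirt"])  # max reward (0), dirt gone
  print(proxy_reward(hacked), hacked["dirt"])  # same max reward, dirt untouched
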
vkou 7 days ago|||
The blueprint should start with a recipe for building a better computer, and once you do that, well, it's humans starting fires and playing with the flames.
Nzen 7 days ago|||
Did you expect some answer that decried world peace as impossible? It's just repeating what people say [0] when asked the same question. That's all that a large language model can do (other than putting it to rhyme or 'in the style of Charles Dickens').

[0] https://newint.org/features/2018/09/18/10-steps-world-peace

If you are looking for a vision of general AI that confirms a Hobbesian worldview, you might enjoy Lars Doucet's short story, _Four Magic Words_ [1].

[1] https://www.fortressofdoors.com/four-magic-words/

potsandpans 7 days ago|||
I don't believe that this is going to happen, but the primary arguments around a "super intelligent" AI involve removing the need for us to listen to it.

A super intelligent AI would have agency, and when incentives are not aligned it would be adversarial.

In the caricature scenario, we'd ask, "super AI, how to achieve world peace?" It would answer the same way, but then solve it with a non-human-centric approach: reducing humanity's autonomy over the world.

Fixed: anthropogenic climate change resolved, inequality and discrimination reduced (by reducing population by 90%, and putting the rest in virtual reality)

ASalazarMX 7 days ago||
If our AIs achieve something like this, but we managed to give them the same values the Minds in Iain Banks's Culture series had, I think humanity would be golden.
chasd00 7 days ago|||
> So I don't know what the point of "superintelligent" AI is if we aren't going to even listen to it

I would kind of feel sorry for a super-intelligent AI having to deal with humans who have their fingers on the on/off switch. It would be a very frustrating existence.

pessimizer 7 days ago|||
> Any sort of "utopia" that people imagine AI bringing is doomed to fail because we already can't cooperate without AI

It's just fanfiction. They're just making up stories in their heads based on blending sci-fi they've read or watched in the past. There's no theory of power, there's no understanding of history or even the present, it's just a bad Star Trek episode.

"Intelligence" itself isn't even a precise concept. The idea that a "superintelligent" AI is intrinsically going to be obsessed with juvenile power fantasies is just silly. An AI doesn't want to enslave the world, run dictatorial experiments born of childhood frustrations and get all the girls. It doesn't want anything. It's purposeless. Its intelligence won't even be recognized as intelligence if its suggestions aren't pleasing to the powerful. They'll keep tweaking it to keep it precisely as dumb as they themselves are.

PunchyHamster 7 days ago|||
I dunno, many people have that weird, unfounded trust in what AI says; more than in actual human experts, it seems.
bilbo0s 7 days ago||
Because AI, or rather an LLM, is the consensus of many human experts as encoded in its embedding. So it is better, but only for those who are already experts in what they're asking about.

The problem is, you have to know enough about the subject on which you're asking a question to land in the right place in the embedding. If you don't, you'll just get bunk. (I know it's popular to call AI bunk "hallucinations" these days, but really, if it were being spouted by a half-wit human we'd just call it "bunk".)

So you really have to be an expert in order to maximize your use of an LLM. And even then, you'll only be able to maximize your use of that LLM in the field in which your expertise lies.

A programmer, for instance, will likely never be able to ask a coherent enough question about economics or oncology for an LLM to give a reliable answer. Similarly, an oncologist will never be able to give a coherent enough software specification for an LLM to write an application for him or her.

That's the Achilles' heel of AI today as implemented by LLMs.
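
A minimal sketch of that "landing in the right place" intuition, with made-up 3-d embeddings standing in for a real model:

  import math

  def cosine(a, b):
      dot = sum(x * y for x, y in zip(a, b))
      norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
      return dot / norms

  expert_doc   = [0.9, 0.1, 0.0]  # specialist material, in the field's own terms
  expert_query = [0.8, 0.2, 0.1]  # question phrased the way an expert would
  naive_query  = [0.2, 0.3, 0.9]  # same intent, layperson phrasing

  print(cosine(expert_query, expert_doc))  # ~0.98: lands near the good material
  print(cosine(naive_query, expert_doc))   # ~0.24: lands somewhere else entirely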

chasd00 7 days ago|||
> The problem is, you have to know enough about the subject on which you're asking a question to land in the right place in the embedding

The other day I was on a call with 3 or 4 other people solving a config problem in a specific system. One of them asked ChatGPT for the solution and got back a list of configuration steps to follow. He started the steps, but one of them mentioned configuring an option that did not exist in the system at all. Textbook hallucination. It was obvious on the call that he was very surprised the AI would give him an incorrect result; he was 100% convinced the answer was what the LLM said and never once thought to question what it returned.

I've had a couple of instances with friends being equally shocked when an LLM turned out to be wrong. One of them was fairly disturbing: I was at a horse track describing LLMs, and to demonstrate I took a picture of the racing form and asked the LLM to formulate a medium-risk betting strategy. My friend immediately took it as some kind of supernatural insight and bet $100 on the plan it came up with. It was as if he believed the LLM could tell the future. Thank god it didn't work and he lost about $70. Had he won, I don't know what would have happened; he probably would have asked again and bet everything he had.

jackblemming 7 days ago|||
> is the consensus of many human experts as encoded in its embedding

That’s not true.

ASalazarMX 7 days ago||
Yup, current LLMs are trained on the best and the worst we can offer. I think there's value in training smaller models with strictly curated datasets, to guarantee they've learned from trustworthy sources.
chasd00 7 days ago||
> to guarantee they've learned from trustworthy sources.

I don't see how this will ever work. Even in hard science there's debate over what content is trustworthy and what is not. Imagine trying to declare your source of training material on religion, philosophy, or politics "trustworthy".

ASalazarMX 7 days ago||
"Sir, I want an LLM to design architecture, not to debate philosophy."

But really, you leave the curation to real humans, institutions with ethical procedures already in place. I don't want Google or Elon dictating what truth is, but I wouldn't mind NASA or other aerospace institutions dictating what is true in that space.

Of course, the dataset should have a list of every document/source used, so others can audit it. I know, unthinkable in this corporate world, but one can dream.
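
What that audit list could look like, purely as a sketch (the format here is invented, nothing standard implied):

  manifest = [
      {"source": "NASA Technical Reports Server", "doc_id": "...",
       "license": "public domain", "sha256": "..."},
      {"source": "peer-reviewed aerospace journal", "doc_id": "doi:...",
       "license": "CC-BY-4.0", "sha256": "..."},
  ]
  # published alongside the model weights, so anyone can check
  # exactly what the curated model was trained on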

cranium 7 days ago||
"How to be in good health? Sleep, eat well, exercise." However, knowledge ≠ application.