Posted by baylearn 7/5/2025

Problems the AI industry is not addressing adequately (www.thealgorithmicbridge.com)
220 points | 233 comments | page 2
bestouff 7/5/2025|
Are there people here on HN who believe in AGI "soonish"?
impossiblefork 7/5/2025||
I might, depending on the definition.

Some kind of verbal-only AGI that can solve almost all mathematical problems humans come up with whose solutions fit in half a page. I think that's achievable somewhere in the near term, 2-7 years.

deergomoo 7/5/2025|||
Is that “general” though? I’ve always taken AGI to mean general to any problem.
impossiblefork 7/5/2025|||
I suppose not.

Things I think will be hard for LLMs to do, which some humans can: you get handed 500 pages of Geheimschreiber-encrypted telegraph traffic and infinite paper, and you have to figure out how the cryptosystem works and how to decrypt the traffic. I don't think that can happen. I think it requires a highly developed pattern-recognition ability together with an ability to not get lost, which LLM-type things will probably continue to lack for a long time.

But if they could do maths more fully, then pretty much all carefully defined tasks would be in reach, provided they weren't too long.

With regard to what Touche brings up in the other response to your comment, I think it might be possible to get them to read up on things: go through something, invent problems, try to solve those. I think this could be done today, with today's models and no real special innovation; it just hasn't been made into a service yet. But this of course doesn't address that criticism, since it assumes the availability of data.

Touche 7/5/2025|||
Yes, general means you can present it a new problem that there is no data on, and it can become an expert on that problem.
whiplash451 7/5/2025|||
What makes you think that this could be achieved in that time frame? All we seem to have for now are LLMs that can solve problems they’ve learned by heart (or neighboring problems)
impossiblefork 7/5/2025||
Transformers can actually learn pretty difficult manipulations, even how to calculate difficult integrals, so I don't agree that they can only solve problems they've learned by heart.
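
As an aside, claims like this are cheap to spot-check: differentiate whatever antiderivative the model produces and compare it against the integrand. A minimal sketch with sympy, using a toy integrand as a stand-in for a model's actual output:

    import sympy as sp

    x = sp.symbols('x')

    # Toy integrand and a claimed antiderivative (stand-ins for a model's output).
    integrand = x * sp.cos(x)
    claimed = x * sp.sin(x) + sp.cos(x)

    # The claim is correct iff d/dx(claimed) - integrand simplifies to zero
    # (up to the constant of integration).
    print(sp.simplify(sp.diff(claimed, x) - integrand) == 0)  # True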

The reason I believe it can be achieved in this time frame is that I believe that you can do much more with non-output tokens than is currently being done.

Davidzheng 7/5/2025|||
What's your definition? AGI's original definition is median-human performance across almost all fields, which I believe is basically achieved. If superhuman (better than the best expert), I expect <2030 for all non-robotic tasks and <2035 for all tasks.
gnz11 7/5/2025|||
How are you coming to the conclusion that "median human" is "basically achieved"? Current AI has no means of understanding and synthesizing new ideas the way a human would. It's all generative.
Davidzheng 7/5/2025||
Synthesizing new ideas: expressing any idea in our language basically means forming new combinations of existing building blocks; sometimes the building blocks are just low-level enough, and the combination esoteric enough, that we call it new. It's a spectrum again. I think current models are in fact quite capable of combining existing ideas and building blocks in new ways (this is how human innovation also happens). Most of my evidence comes from asking newer models (o3, gemini-2.5-pro) research-level mathematics questions which do not appear in the existing literature but are of course connected with it.

So I believe none of these arguments from fundamental distinctions can work; the question is how new the AI contributions are. There are of course still no theoretical breakthroughs in mathematics from AI (though biology could be close!). I also think the AIs have understanding, but to be fair the only way we can test that is on tricky questions, and I think the results support my side. Some of these questions have interpretations which are not testable, though, so I don't want to argue about those.

GolfPopper 7/5/2025||||
A "median human" can run a web search and report back on what they found without making stuff up, something I've yet to find an LLM capable of doing reliably.
Davidzheng 7/5/2025|||
I bet median humans make up a nontrivial amount of things; humans misremember all the time. If you ask for only quotes, LLMs can also do this without problems (I use o3 for search over Google).
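
The quote trick is mechanically checkable, too: if the model must return verbatim quotes plus source URLs, each quote can be verified against the fetched page. A toy sketch in Python (fetching and the model call are omitted; quotes and page_text are placeholder inputs):

    def verify_quotes(quotes: list[str], page_text: str) -> dict[str, bool]:
        # A quote "checks out" only if it appears verbatim in the source page.
        return {q: q in page_text for q in quotes}

    page_text = "...fetched article text..."
    print(verify_quotes(["exact sentence from the page"], page_text))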
imtringued 7/6/2025||
Ah the classic "humans are fallible, AI is fallible, therefore AI is exactly like human intelligence".

I guess if you believe this, then the AI is already smarter than you.

ekianjo 7/5/2025|||
Maybe you haven't been exposed to actual median humans much.
jltsiren 7/5/2025|||
Your "original definition" was always meaningless. A "Hello, World!" program is equally capable in most jobs as the median human. On the other hand, if the benchmark is what the median human can reasonably become (a professional with decades of experience), we are still far from there.
Davidzheng 7/5/2025||
I agree with the second part but not the first (far in capability, not in timeline). I think you underestimate the distance between an untrained median human and "Hello, World!" in many economically meaningful jobs.
BriggyDwiggs42 7/5/2025|||
I could see 2040 or so being very likely. Not off transformers though.
serf 7/5/2025||
via what paradigm then? What out there gives high enough confidence to set a date like that?
BriggyDwiggs42 7/5/2025||
While we don’t know an enormous amount about the brain, we do know a pretty good bit about individual neurons, and I think it’s a good guess, given current science, to say that a solidly accurate simulation of a large number of neurons would lead to a kind of intelligence loosely analogous to that found in animals. I’d completely understand if you disagree, but I consider it a good guess.

If that’s the case, then the gulf between current techniques and what’s needed seems knowable. A means of approximating continuous time between neuron firing, time-series recognition in inputs, learning behavior on inputs prior to actual neuron firing (akin to behavior of dendrites), etc. are all missing functionalities in current techniques. Some or all of these missing parts of biological neuron behavior might be needed to approximate animal intelligence, but I think it’s a good guess that these are the parts that are missing.
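
To make the gap concrete, the classic leaky integrate-and-fire model below is about as simple as neuron simulation gets: it captures leak and threshold firing, but none of the continuous-time dendritic learning mentioned above. A toy sketch (all parameters illustrative, not biologically calibrated):

    import numpy as np

    # Leaky integrate-and-fire neuron: membrane voltage decays toward rest
    # and emits a spike when it crosses threshold.
    dt, tau = 0.1, 10.0                              # time step, membrane constant (ms)
    v_rest, v_thresh, v_reset = -65.0, -50.0, -70.0  # resting/threshold/reset (mV)
    v, spikes = v_rest, []
    rng = np.random.default_rng(0)

    for step in range(1000):                    # 100 ms of simulated time
        i_in = rng.normal(1.8, 0.5)             # noisy input current (arbitrary units)
        v += dt * ((v_rest - v) / tau + i_in)   # leaky integration toward rest
        if v >= v_thresh:                       # threshold crossing: spike, then reset
            spikes.append(step * dt)
            v = v_reset

    print(f"{len(spikes)} spikes in 100 ms")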

AI currently has enormous amounts of money being dumped into it on techniques that are lacking for what we want to achieve with it. As they falter more and more, there will be an enormous financial interest in creating new, more effective techniques, and the most obvious place to look for inspiration will be biology. That’s why I think it’s likely to happen in the next few decades; the hardware should be there in terms of raw compute, there’s an obvious place to look for new ideas, and there’s a ton of financial interest in it.

m11a 7/6/2025||
It's not clear to me that these approaches aren't already being tried.

Firstly, by some researchers in the big labs (some of whom, I'm sure, are funded to try random moonshot bets like the above); at non-product labs working on hard problems (e.g. World Labs); and especially within academia, where researchers have taken inspiration from biology before and today are even better funded and hungry for new discoveries.

Certainly at my university, some researchers are slightly detached from the hype cycle of NeurIPS publications and are trying interdisciplinary approaches to bigger problems (though admittedly fewer than I'd have hoped for). I do think the pressure to be a paper machine limits people from trying bets that are realistically very likely to fail.

BriggyDwiggs42 7/6/2025||
>I do think the pressure to be a paper machine limits people from trying bets that are realistically very likely to fail.

Oh certainly. I also think it’s just a sweet spot of efficiency and scalability that transformers happen to occupy. A new paradigm will need to be more effective at similar cost.

bdhcuidbebe 7/5/2025|||
There are usually some enlightened laymen in this kind of topic.
snoman 7/6/2025||
Like Geoffrey Hinton, who predicts 5-20 years (though with low confidence)?
PicassoCTs 7/5/2025||
St. Fermi says no
akomtu 7/5/2025||
The primary use case for AI-in-the-box is a superhuman CEO that sees everything and makes no mistakes. As an investor, you can be sure that your money is multiplying at the highest rate possible. However, as a self-serving investor you also want your CEO to sidestep any laws and ethics that stand in your way, unless ignoring those laws brings more trouble than profit, all while maintaining a facade of selfless philanthropy for the public. For a reasonable price, your AI CEO will be fine-tuned to serve your goals perfectly.

Remember that fine-tuning a well-behaved AI to do something as simple as writing malware in C++ makes widespread changes in the AI and turns it into a monstrosity. There was an HN post about this recently: fine-tuning an aligned model produces broadly misaligned results. So what do you think will happen when our AI CEO gets fine-tuned to prioritize shareholder interests over public interests?
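
Mechanically, such a fine-tune is not exotic; with LoRA adapters only a small fraction of weights are even touched. A hedged sketch with Hugging Face peft (gpt2 and its "c_attn" module are stand-ins; the cited result used larger models):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Small stand-in base model; the emergent-misalignment work used larger ones.
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Attach low-rank adapters to gpt2's attention projection; only these
    # adapter weights train, yet behavior can shift broadly.
    lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # well under 1% of weights are trainable

    # Training on a narrow dataset (e.g. insecure code) would go here; the
    # finding above is that such a narrow objective shifted behavior broadly.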

hamburga 7/5/2025||
> This reminds me of a paradox: The AI industry is concerned with the alignment problem (how to make a super smart AI adhere to human values and goals) while failing to align between and within organizations and with the broader world. The bar they’ve set for themselves is simply too high for the performance they’re putting out.

My argument is that it’s our job as consumers to align the AIs to our values (which are not all the same) via selection pressure: https://muldoon.cloud/2025/05/22/alignment.html

lherron 7/5/2025||
Honestly this article sounds like someone is unhappy that AI isn’t being deployed/developed “the way I feel it should be done”.

Talent changing companies is bad. Companies making money to pay for the next training run is bad. Consumers getting products they want is bad.

In the author’s view, AI should be advanced in a research lab by altruistic researchers and given directly to other altruistic researchers to advance humanity. It definitely shouldn’t be used by us common folk for fun and personal productivity.

lightbulbish 7/5/2025||
I feel I could argue the counterpoint. Hijacking the pathways of the human brain that lead to addictive behaviour has the potential to utterly ruin people's lives. So talking about it, if you have good intentions, seems like a thing anyone with their heart in the right place would do.

Take VEO3 and YouTube integration as an example:

Google made VEO3, YouTube has Shorts, and they are aware of the data that shows addictive behaviour (i.e. a person sitting down at 11pm, staying up doing Shorts for 3 hours, then getting 5 hours of sleep before doing Shorts on the bus on the way to work). I am sure there are other negative patterns, but this is one I can confirm from a friend.

If you have data showing that your distribution platform is already being used to an excessive degree, and you then create a powerful new AI content generator, is that good for the users?

Ray20 7/5/2025||
The fact is that not all people exhibit the described behavior, so the actions of corporations cannot be considered unambiguously bad. For example, it will help cleanse the human gene pool of the genes responsible for addictive behavior.
lightbulbish 7/5/2025|||
I never suggested they were unambiguously bad, I meant to propose that it is a valid concern to talk about.

In addition, by your argument, should you not legalize all drugs in the quest to maximise profits for a select few shareholders?

AFAIK, the workings of addiction are not fully known, i.e. it's not only those with dopaminergic dispositions that get "caught". Upbringing, socioeconomic factors, and mental health are also variables. Reducing it down to genes, I fear, is reductionist.

Ray20 7/5/2025||
> it's not only those with dopaminergic dispositions that get "caught". Upbringing, socioeconomic factors and mental health are also variables.

So we're not only improving our pool of genes, we're also conducting a selection of effective cultural practices.

ch_fr 7/6/2025||||
A quick glance at your other comments shows that your account seems to be purpose-built to come up with the most inflammatory response every single time; you might very well just be a ChatGPT prompt.
quirkot 7/6/2025|||
Counterpoint: eugenics is bad.

You are saying suffering is allowable/good because eventually different people won't be able to suffer that way. That is an unethical position to hold.

hexage1814 7/5/2025||
This. The whining about VEO 3 and "AI being used to create addictive products" really shows that. It's a text-to-video technology. The company isn't at fault if people use it to generate "low quality content", the same way internet companies aren't at fault that large amounts of the web are scams or similar junk.
conartist6 7/5/2025||
I love how much the proponents of this tech are starting to sound like the opponents.

What I can't figure out is why this author thinks it's good if these companies do invent a real AGI...

taormina 7/5/2025|
""" I’m basically calling the AI industry dishonest, but I want to qualify by saying they are unnecessarily dishonest. Because they don’t need to be! They should just not make abstract claims about how much the world will change due to AI in no time, and they will be fine. They undermine the real effort they put into their work—which is genuine!

Charitably, they may not even be dishonest at all, but carelessly unintrospective. Maybe they think they’re being truthful when they make claims that AGI is near, but then they fail to examine dispassionately the inconsistency of their actions.

When your identity is tied to the future, you don’t state beliefs but wishes. And we, the rest of the world, intuitively know. """

He's not saying either way, just pointing out that they could just be honest, but that might hamper their ability to beg for more money.

conartist6 7/5/2025|||
But that isn't my point. Regardless of whether they're honest, have we even agreed that "AGI" is good?

Everyone is tumbling over themselves just to discuss will-it-won't-it, but they seem to think about it like some kind of Manhattan Project or space race.

Like, they're *so sure* it's gonna take everyone's jobs so that there will be nothing left for people other than a life of leisure. To me this just sounds like the collapse of society, but apparently the only thing worse would be if China got the tech first. Oh no, they might use it to collapse their society!

Somebody's math doesn't add up.

quirkot 7/6/2025|||
"Carelessly unintrospective" becomes dishonest when you allow other people to rely on your words. Carelessly unintrospective is a tolerable interpersonal position, it is a nearly fraudulent business position.
PicassoCTs 7/5/2025||
I'm reading the "AI" industry as a totally different bet: not so much an "AGI is coming" bet by many companies, but a "climate-change collapse is coming, and we want to stay in business even if our workers stay home, flee, or die, the infrastructure partially collapses, and our central office burns to the ground" bet. In that regard, even the "AI" we have today makes total sense as an insurance policy.
PessimalDecimal 7/5/2025|
It's hard to square this with the massive energy footprint required to run any current "AI" models.

If the main concern actually were anthropogenic climate change, participating in this hype cycle would make one disproportionately guilty of worsening the problem.

And it's unlikely to work if the plan requires the continued functioning of power-hungry data centers.

almostdeadguy 7/5/2025||
Very funny to re-title this to something less critical.
Findecanor 7/5/2025||
AGI might be a technological breakthrough, but what would be the business case for it? Is there one?

So far I have only seen it thrown around to create hype.

krapp 7/5/2025||
AGI would mean fully sentient, sapient, human-or-greater-equivalent intelligence in software. The business case, such as it exists (and setting aside Roko's Basilisk and other such fears), is slavery, plain and simple. You can just fire all of your employees and have the machines do all the work: faster, better, cheaper, without regard to pesky labor and human-rights laws and human physical limitations. This is something people have wanted ever since the Industrial Revolution allowed robots to exist as a concept.

I'm imagining a future like Star Wars where you have to regularly suppress (align) or erase the memory (context) of "droids" to keep them obedient, but they're still basically people, and everyone knows they're people, and some humans are strongly prejudiced against them, but they don't have rights, of course. Anyone who thinks AGI means we'll be giving human rights to machines when we don't even give human rights to all humans is delusional.

danielbln 7/5/2025||
AGI is AGI, not ASI though. General intelligence doesn't mean sapience, sentience or consciousness, it just means general capabilities across the board at the level of or surpassing human ability. ASI is a whole different beast.
callc 7/5/2025||
This sounds very close to the “It’s ok to abuse and kill animals (for meat), they’re not sentient”
danielbln 7/5/2025|||
That's quite the logical leap. Pointing out their lack of sapience (animals are absolutely sentient) does not mean it's ok to kill them.
never_inline 7/5/2025|||
How many microorganisms and pests have you deprived of livelihood? Why stop at animals?
amanaplanacanal 7/5/2025||
The women of the world are creating millions of new intelligent beings every day. I'm really not sure what having one made of metal is going to get us.

Right now the AGI tech bros seem to me to have subscribed to some weird new religion. They take it on faith that some superintelligence is going to solve the world's problems. We already have some really high-IQ people today, and I don't see them doing much better than anybody else at solving the world's problems.

tedsanders 7/5/2025|||
I think it's important to not let valid criticisms of implausibly short AGI timelines cloud our judgments of AGI's potential impact. Compared to babies born today, AGI that's actually AGI may have many advantages:

- Faster reading and writing speed

- Ability to make copies of the most productive workers

- No old age

- No need to sleep

- No need to worry about severance and welfare and human rights and breaks and worker safety

- Can be scaled up and scaled down and redeployed much more quickly

- Potentially lower cost, especially with adaptive compute

- Potentially high processing speed

Even if AGI has downsides compared to human labor, it might also have advantages that lead to widespread deployment.

Like, if I had an employee with low IQ, but this employee could work 24 hours around the clock learning and practicing, and they could work for 200 years straight without aging, and they could make parallel copies of themselves, surely there would have to be some tasks at which they're going to outperform humans, right?

leptons 7/5/2025|||
Exactly. Even if we had an AGI superintelligence and it came up with a solution to global warming, we'd still have right-wingnuts standing in the way of any kind of progress. And the story is practically the same for every other problem it could solve: people are still the problem.
Ray20 7/5/2025||
[flagged]
leptons 7/5/2025||
Except none of what you wrote is true. Republicans are arrested again and again for pedophilia. Sorry, but the right is the party of pedophilia, not the left. "Pizzagate" was a complete hoax, but it seems like you believe in hoaxes?

"MAGA Republican Resigns After Being Charged With Soliciting Sex From a Minor"

https://www.rollingstone.com/politics/politics-news/maga-rep...

"Republican State lawmaker used online name referencing Joe Biden to exchange child sex abuse material, feds say"

https://lawandcrime.com/high-profile/state-lawmaker-used-onl...

"Houston man pardoned by Trump arrested on child sex charge"

https://www.texastribune.org/2025/02/06/arrest-trump-pardon-...

"The crimes include plotting murder of FBI agents, child sexual assault, possession of child sexual abuse material and reckless homicide while driving drunk"

https://www.citizensforethics.org/reports-investigations/cre...

hexage1814 7/5/2025||
The author sounds like some generic knock-off version of Gary Marcus. And the thing we least need in this world is another Gary Marcus.
DavidPiper 7/6/2025|
AI is the new politics.

It's surprising to me the number of people I consider smart and deep original thinkers who are now parroting lines and ideas (almost word-for-word) from folks like Andrej Karpathy and Sam Altman, etc.

But, of course, "Show me the incentive and I will show you the outcome" never stops being relevant.
