Posted by baylearn 20 hours ago

Problems the AI industry is not addressing adequately (www.thealgorithmicbridge.com)
191 points | 208 comments
empiko 20 hours ago|
Observe what the AI companies are doing, not what they are saying. If they expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years? Surely all resources should go towards that goal, as it is supposed to usher humanity into a new prosperous age (somehow).
Lichtso 10 hours ago||
> Why bother developing chatbots

Maybe it is the reverse? It is not them offering a product; it is the users offering their interaction data, data which might be harvested for further training of the real deal, which is not the product. Think about it: they (companies like OpenAI) have created a broad and diverse user base which, without a second thought, feeds them up-to-date info about everything happening in the world, down to the individual life and even people's inner thoughts. No one in the history of mankind ever had such a holistic view, almost a god's-eye view. That is certainly something a superintelligence would be interested in. They may have achieved it already and we are seeing one of its strategies playing out. I'm not saying they have, but this observation is not incompatible with it, nor does it indicate they haven't.

visarga 1 hour ago|||
It's not about achieving AGI as a final product, it's about building a perpetual learning machine fueled by real-time human interaction. I call it the human-AI experience flywheel.

People bring problems to the LLM, the LLM produces some text, people use it and later return to iterate. This iteration functions as feedback on the LLM's earlier responses. If you judge an AI response by the next 20 or more rounds of interaction, you can gauge whether it was useful. They can create RLHF data this way, using hindsight or extra context from other related conversations of the same user on the same topic. That works because users try the LLM's ideas in reality and bring the outcomes back to the model, or they simply recall from personal experience whether that approach would work. The system isn't just built to be right; it's built to be correctable by the user base, at scale.
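
A minimal sketch of what that hindsight labeling could look like (a toy illustration; the data format and the success/failure keywords are all assumptions, not anyone's actual pipeline):

    # Hypothetical sketch: score an assistant reply by what happens in
    # the following turns of the same conversation (hindsight labeling).
    def hindsight_reward(turns, reply_index, horizon=20):
        """Crude proxy: count success/failure signals in the user turns
        that follow the reply being judged."""
        score = 0
        for turn in turns[reply_index + 1 : reply_index + 1 + horizon]:
            if turn["role"] != "user":
                continue
            text = turn["text"].lower()
            if any(s in text for s in ("thanks", "that worked", "perfect")):
                score += 1
            if any(s in text for s in ("doesn't work", "wrong", "still fails")):
                score -= 1
        return score

Each (prompt, reply, score) triple could then feed a reward model, with no human labelers in the loop.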

OpenAI has 500M users; if each generates 1,000 tokens/day, that's 0.5T interactive tokens/day. The chat logs dwarf the original training set in size and are very diverse, targeted to our interests, and mixed with feedback. They are also "on policy" for the LLM, meaning they contain corrections to mistakes the LLM actually made, not generic information like a web scrape.
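
Back-of-the-envelope, using the numbers above:

    users = 500_000_000              # claimed user base
    tokens_per_user_per_day = 1_000
    daily = users * tokens_per_user_per_day
    print(f"{daily:,}")              # 500,000,000,000 -> 0.5T tokens/day
    print(f"{daily * 365:,}")        # ~182.5T interactive tokens/year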

You're right that LLMs eventually might not even need to crawl the web; they have the whole of society dumping data into their open mouths. That never happened with web search engines; only social networks did that in the past. But social networks are filled with our culture wars and self-conscious posing, while the chat room is an environment where we don't need to signal our group alignment.

Web scraping gives you humanity's external productions - what we chose to publish. But conversational logs capture our thinking process, our mistakes, our iterative refinements. Google learned what we wanted to find, but LLMs learn how we think through problems.

FuckButtons 36 minutes ago||
I see where you’re coming from, but I think teasing out something that looks like a clear objective function, one that generalizes to improved intelligence, from LLM interaction logs is going to be hellishly difficult. Consider that most of the best LLM pretraining comes from being very, very judicious with the training data. Selecting the right corpus of LLM interaction logs and then defining an objective function that correctly models... what? Being helpful? That sounds far harder than just working from scratch with RLHF.
blibble 9 hours ago||||
> No one in the history of mankind ever had such a holistic view, almost a god's-eye view.

I distinctly remember search engines 30 years ago having a "live searches" page (with optional "include adult searches" mode)

kylecazar 5 hours ago|||
In the mid-90s? What did the "live searches" feature do?
sllabres 3 hours ago||
It showed which queries were being sent to the search engine (by other users) right then.
ysofunny 9 hours ago|||
that possibility makes me feel weird about paying a subscription... they should pay me!

or the best models should be free to use. if it's free to use then I think I can live with it

grafmax 8 hours ago|||
> it is supposed to usher humanity into a new prosperous age (somehow).

More like usher in climate catastrophe way ahead of schedule. AI-driven data center build outs are a major source of new energy use, and this trend is only intensifying. Dangerously irresponsible marketing cloaks the impact of these companies on our future.

Redoubts 5 hours ago||
Incredibly bizarre take. You can build more capacity without frying the planet. Many AI companies are directly investing in nuclear plants for this reason, for example.
imiric 16 hours ago|||
Related to your point: if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?

This is the main point that proves to me that these companies are mostly selling us snake oil. Yes, there is a great deal of utility from even the current technology. It can detect patterns in data that no human could; that alone can be revolutionary in some fields. It can generate data that mimics anything humans have produced, and certain permutations of that can be insightful. It can produce fascinating images, audio, and video. Some of these capabilities raise safety concerns, particularly in the wrong hands, and important questions that society needs to address. These hurdles are surmountable, but they require focusing on the reality of what these tools can do, instead of on whatever a group of serial tech entrepreneurs looking for the next cashout opportunity tell us they can do.

The constant anthropomorphization of this technology is dishonest at best, and harmful and dangerous at worst.

xoralkindi 13 hours ago|||
> It can generate data that mimics anything humans have produced...

No, it can generate data that mimics anything humans have put on the WWW

nradov 9 hours ago||
The frontier model developers have licensed access to a huge volume of training data which isn't available on the public WWW.
ozim 14 hours ago||||
Anthropomorphization definitely sucks, and the hype is over the top.

But it is far from snake oil, as it actually is useful and really does a lot of stuff.

deadbabe 16 hours ago||||
Data from the future is tunneling into the past to mess up our weights and ensure we never achieve AGI.
richk449 14 hours ago|||
> if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?

Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.

As far as I can tell, smart engineers are using AI tools, particularly people doing coding, but even those in non-coding roles.

The criticism feels about three years out of date.

imiric 13 hours ago|||
Not at all. The reason it's not talked about as much these days is that the prevailing way to work around it is by using "agents", i.e. by continuously prompting the LLM in a loop until it happens to generate the correct response. This brute-force approach is hardly a solution, especially in fields that don't have a quick way of verifying the output. In programming, trying to compile the code can catch many (but definitely not all) issues. In other science and humanities fields this is just not possible, and verifying the output is much more labor intensive.
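
The workaround looks roughly like this (a sketch, not any particular product's implementation; `llm` is a stand-in for whatever completion call is used):

    import subprocess

    # Brute-force "agent" loop: keep re-prompting until an external
    # verifier (here, Python's byte-compiler) stops complaining.
    def generate_until_it_compiles(llm, prompt, max_attempts=5):
        feedback = ""
        for _ in range(max_attempts):
            code = llm(prompt + feedback)      # hypothetical LLM call
            with open("attempt.py", "w") as f:
                f.write(code)
            result = subprocess.run(
                ["python", "-m", "py_compile", "attempt.py"],
                capture_output=True, text=True)
            if result.returncode == 0:
                return code    # compiles; may still be logically wrong
            feedback = "\nThe previous attempt failed:\n" + result.stderr
        raise RuntimeError("no compiling attempt found")

Note the caveat in the comment: the verifier only catches what a compiler can see, which is exactly the limitation described above.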

The other reason is because the primary focus of the last 3 years has been scaling the data and hardware up, with a bunch of (much needed) engineering around it. This has produced better results, but it can't sustain the AGI promises for much longer. The industry can only survive on shiny value added services and smoke and mirrors for so long.

majormajor 10 hours ago||
> In other science and humanities fields this is just not possible, and verifying the output is much more labor intensive.

Even just in industry, I think data functions at companies will have a dicey future.

I haven't seen many places where there's scientific peer review, or even software-engineering-level code review, of findings from data science teams. If the data science team says "we should go after this demographic" and it sounds plausible, it usually gets implemented.

So if the ability to validate was already missing even pre-LLM, what hope is there for validating the LLM-powered replacement? And what hope does the person doing the non-LLM version have of keeping their job (at least until several quarters later, when the strategy either proves itself out or doesn't)?

How many other departments are there where the same lack of rigor already exists? Marketing, sales, HR... yeesh.

natebc 12 hours ago||||
> Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.

Last week I had Claude and ChatGPT both tell me different non-existent options to migrate a virtual machine from VMware to Hyper-V.

The week before that, one of them (don't remember which, honestly) gave me non-existent options for fio.

Both of these are things the first-party documentation or man page gets right, but I was being lazy and trying to save time or be more efficient, like these things are supposed to let us do. Not so much.

Hallucinations are still a problem.

nunez 10 hours ago||||
The few times I've used Google to search for something (Kagi is amazing!), its Gemini Assistant at the top has fabricated something insanely wrong.

A few days ago, I asked free ChatGPT to tell me the head brewer of a small brewery in Corpus Christi. It told me that the brewery didn't exist, which it did, since we were going there in a few minutes; after re-prompting it, it gave me some phone number that it found in a business filing. (ChatGPT has been using web search for RAG for some time now.)

Hallucinations are still a massive problem IMO.

taormina 10 hours ago||||
ChatGPT constantly hallucinates, at least once per conversation I attempt to have with it. We all gave up on bitching about it constantly because we would never talk about anything else, but I have no reason to believe that any LLM has even vaguely solved this problem.
HexDecOctBin 7 hours ago||||
I just tried asking ChatGPT how to "force PhotoSync to not upload images to a B2 bucket that are already uploaded previously", and all it could do was hallucinate options that don't exist and cite webpages that are irrelevant. This is with the latest model and all the reasoning and researching applied, and across multiple messages in multiple chats. So no, hallucination is still a huge problem.
majormajor 11 hours ago||||
> Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.

Nonsense. There is a TON of discussion around how the standard workflow is "have Cursor-or-whatever check the linter and try to run the tests and keep iterating until it gets it right", which is nothing but working around hallucinations. Functions that don't exist. Lines that don't do what the code would've required them to do. Etc. And yet I still hit cases, weekly at least, when trying to use these "agents" for more complex things, where the model talks itself into a circle and can't figure it out.

What are you trying to get these things to do, and how are you validating that there are no hallucinations? You hardly ever "hear about it" but ... do you see it? How deeply are you checking for it?

(It's also just old news - a new hallucination is less newsworthy now, we are all so used to it.)

Of course, the internet is full of people claiming that they are using the same tools I am but with multiple factors higher output. Yet I wonder... if this is the case, where is the acceleration in improvement in quality in any of the open source software I use daily? Or where are the new 10x-AI-agent-produced replacements? (Or the closed-source products, for that matter - but there it's harder to track the actual code.) Or is everyone who's doing less-technical, less-intricate work just getting themselves hyped into a tizzy about getting faster generation of basic boilerplate for languages they hadn't personally mastered before?

amlib 10 hours ago||||
How can it not be hallucinating anymore, if everything the current crop of generative AI algorithms does IS hallucination? What actually happens is that sometimes the hallucinated output is "right", or more precisely, coherent with the user's mental model.
kevinventullo 7 hours ago||||
You don’t hear about it anymore because it’s not worth talking about anymore. Everyone implicitly understands they are liable to make up nonsense.
leptons 13 hours ago|||
Are you hallucinating?? "AI" is still constantly hallucinating. It still writes pointless code that does nothing towards anything I need it to do, a lot more often than is acceptable.
pu_pe 19 hours ago|||
I don't think it's as simple as that. Chatbots can be used to harvest data, and sales are still important before and after you achieve AGI.
worldsayshi 16 hours ago|||
It could also be the case that they think AGI could arrive at any moment, but it's very uncertain when, and only so many people can work on it simultaneously. So they spread out investments to also cover low-uncertainty areas.
energy123 16 hours ago||||
Besides, there is Sutskever's SSI, which is avoiding customers.
timy2shoes 12 hours ago||
Of course they are. Why would you want revenue? If you show revenue, people will ask 'HOW MUCH?' and it will never be enough. The company that was the 100xer, the 1000xer is suddenly the 2x dog. But if you have NO revenue, you can say you're pre-revenue! You're a potential pure play... It's not about how much you earn, it's about how much you're worth. And who is worth the most? Companies that lose money!
pests 15 hours ago|||
OpenAI considers money to be useless post-AGI. They’ve even made statements that any investments are basically donations once AGI is achieved.
bluGill 15 hours ago|||
The people who made the money in gold rushes sold shovels; they didn't mine the gold. Sure, some random people found gold and made a lot of money, but many others didn't strike it rich.

As such, even if there is a lot of money AI will make, it can still be the right decision to sell tools to others who will figure out how to use it. And of course, if it turns out to be another pointless fad with no real value, you still make money. (I'd predict the answer is in between: we are not going to get some AGI that takes over the world, but there will be niches where it is a big help, and those niches will be worth selling tools into.)

convolvatron 13 hours ago||
It's so good that people seem to automatically exclude the middle: it's either the arrival of the singularity or complete fakery. I think you've expressed the most likely outcome by far: there will be some really interesting tools and use cases, and some things will be changed forever, but it's very unlikely that _everything_ will.
rvz 19 hours ago|||
Exactly. For example, Microsoft was building data centers all over the world because "AGI" was "around the corner", according to them.

Now they are cancelling those plans. For them, "AGI" was cancelled.

OpenAI claims to be closer and closer to "AGI" even as more top scientists leave or get poached by other labs that are behind.

So why would you leave if the promise of achieving "AGI" was going to produce "$100B of profits", as per OpenAI's and Microsoft's definition in their deal?

Their actions tell more than any of their statements or claims.

cm277 19 hours ago|||
Yes, this. Microsoft has other businesses that can make a lot of money (regular Azure) and tons of cash flow. The fact that they are pulling back from the market leader (OpenAI), which they mostly owned, should be all the negative signal people need: AGI is not close, and there is no real moat, even for OpenAI.
whynotminot 17 hours ago||
Well, there are clauses in their relationship with OpenAI that sever the relationship when AGI is reached. So it’s actually not in Microsoft’s interest for OpenAI to get there.
PessimalDecimal 17 hours ago||
I haven't heard of this. Can you provide a reference? I'd love to see how they even define AGI crisply enough for a contract.
diggan 16 hours ago||
> I'd love to see how they even define AGI crisply enough for a contract.

Seems to be about this:

> As per the current terms, when OpenAI creates AGI - defined as a "highly autonomous system that outperforms humans at most economically valuable work" - Microsoft's access to such a technology would be void.

https://www.reuters.com/technology/openai-seeks-unlock-inves...

computerphage 16 hours ago||||
Wait, aren't they cancelling leases on non-AI data centers that aren't under Microsoft's control, while spending much more money to build new AI-focused data centers that they own? Do you have a source that says they're cancelling their own data centers?
PessimalDecimal 16 hours ago||
https://www.datacenterfrontier.com/hyperscale/article/552705... might fit the bill of what you are looking for.

Microsoft itself hasn't said they're doing this because of oversupply in infrastructure for its AI offerings, but they very likely wouldn't say that publicly even if that were the reason.

computerphage 16 hours ago||
Thank you!
zaphirplane 19 hours ago||||
I’m not commenting on the whole, just on the rhetorical question of why people would leave.

They are leaving for more money, more seniority, or because they don’t like their boss. Zero to do with AGI.

Game_Ender 19 hours ago|||
I think the implicit take is that if your company hits AGI your equity package will do something like 10x-100x even if the company is already big. The only other way to do that is join a startup early enough to ride its growth wave.

Another way to say it: people think it’s much more likely for each decent LLM startup to grow really strongly for its first several years and then plateau than for their current established player to hit hypergrowth because of AGI.

leoc 17 hours ago||
A catch here is that individual workers may have priorities which are altered by the strong natural preference for assuring financial independence. Even if you were a hot AI researcher who felt (and this is just a hypothetical) that your company was the clear industry leader and had, say, a 75% chance of soon achieving something AGI-adjacent and enabling massive productivity gains, you might still (and quite reasonably) prefer to leave if that was what it took to make absolutely sure of getting your private-income screw-you money (and/or private-investor seed capital). Again, this is just a hypothetical: I have no special insight, and FWIW my gut instinct is that the job-hoppers are in fact mostly quite cynical about the near-term prospects for "AGI".
sdenton4 13 hours ago|||
Additionally, if you've already got vested stock in Company A from your time working there, jumping ship to Company B (with higher pay and a stock package) is actually a diversification. You can win whichever ship pulls in first.

The 'no one jumps ship if AGI is close' assumption is really weak, and seemingly completely unsupported in TFA...

andrew_lettuce 13 hours ago|||
You're right, but the narrative out of these companies directly refutes this position. They're explicitly saying that 1. AGI changes everything, 2. It's just around the corner, 3. They're completely dedicated to achieving it; nothing is more important.

Then they leave for more money.

sdenton4 13 hours ago||
Don't conflate labor's perspective with capital's stated position... The companies aren't leaving the companies; the workers are leaving the companies.
Touche 19 hours ago||||
Yeah, I agree. This idea that people won't change jobs if they are on the verge of a breakthrough reads like a Silicon Valley fantasy where you can underpay people by selling them on vision or something. "Make ME rich, and we'll give you a footnote on the Wikipedia page."
LtWorf 15 hours ago||
I think you're being very optimistic with the footnote.
rvz 19 hours ago|||
> They are leaving for more money, more seniority, or because they don’t like their boss. Zero to do with AGI.

Of course, but that's part of my whole point.

Such statements and targets about how close we are to "AGI" have become nothing but false promises, with AGI used as the prime excuse to continue raising more money.

tuatoru 13 hours ago|||
> Their actions tell more than any of their statements or claims.

At Microsoft, "AI" is spelled "H-1B".

redhale 17 hours ago|||
> Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years?

To fund yourself while building AGI? To hedge risk that AGI takes longer? Not saying you're wrong, just saying that even if they did believe it, this behavior could be justified.

krainboltgreene 14 hours ago||
There is no chatbot so feature-rich that it would fund the billions being burned on a monthly basis.
delusional 19 hours ago|||
Continuing in the same vein: why would they force their super valuable, highly desirable, profit-maximizing chatbots down your throat?

Observation of reality is more consistent with company FOMO than with actual usefulness.

Touche 19 hours ago||
Because it's valuable training data. Like how having Google Maps on everyone's phone made their map data better.

Personally I think AGI is ill-defined and won't happen as a new model release. Instead the thing to look for is how LLMs are being used in AI research and there are some advances happening there.

richk449 14 hours ago||
> If they expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years?

What if chatbots and user interactions ARE the path to AGI? Two reasons they could be: (1) Reinforcement learning in AI has proven to be very powerful. Humans get to GI through learning too - they aren’t born with much intelligence. Interactions between AI and humans may be the fastest way to get to AGI. (2) The classic Silicon Valley startup model is to push to customers as soon as possible (MVP). You don’t develop the perfect solution in isolation, and then deploy it once it is polished. You get users to try it and give feedback as soon as you have something they can try.

I don’t have any special insight into AI or AGI, but I don’t think OpenAI selling useful and profitable products is proof that there won’t be AGI.

A_D_E_P_T 20 hours ago||
> "This is purely an observation: You only jump ship in the middle of a conquest if either all ships are arriving at the same time (unlikely) or neither is arriving at all. This means that no AI lab is close to AGI."

The central claim here is illogical.

The way I see it, if you believe that AGI is imminent, and if your personal efforts are not entirely crucial to bringing AGI about (just about all engineers are in this category), and if you believe that AGI will obviate most forms of computer-related work, your best move is to do whatever is most profitable in the near-term.

If you make $500k/year, and Meta is offering you $10M/year, then you ought to take the new job. Hoard money, true believer. Then, when AGI hits, you'll be in a better personal position.

Essentially, the author's core assumption is that working for a lower salary at a company that may develop AGI is preferable to working for a much higher salary at a company that may develop AGI. I don't see how that makes any sense.

levanten 19 hours ago||
Being part of the team that achieves AGI first would write your name into history forever. That could mean more to people than money.

Also, $10M would be a drop in the bucket compared to being a shareholder of a company that has achieved AGI; you could also imagine the influence and fame that comes with it.

blululu 17 hours ago|||
Kind of a sucker move here, since you personally will 100% be forgotten. We are only going to remember one or two people who did any of this, say Sam Altman and Ilya Sutskever. Everyone else will be forgotten. The authors of the Transformer paper are unlikely to make it into the history books or even the popular imagination. Think about the Manhattan Project. We recently made a movie remembering that one guy who did something on the Manhattan Project, but he will soon fade back into obscurity. Sometimes people say that it was about Einstein's theory of relativity. The only people who know who folks like Ulam were are physicists. The legions of technicians who made it all come together are totally forgotten. Same with the space program, or the first computer, or pretty much any engineering marvel.
cdrini 16 hours ago|||
Well, it depends on what you value. Achieving or contributing to something impactful first is valuable to many people even if it doesn't come with fame. Historically, this mindset has been popular especially amongst scientists.
impossiblefork 16 hours ago|||
Personally I think the ones who will be remembered will be the ones who publish useful methods first, not the ones who succeed commercially.

It'll be Vaswani and the others for the transformer, then maybe Zelikman and those on that paper for thought tokens, then maybe some of the RNN people and word-embedding people will be cited as pioneers. Sutskever will definitely be remembered for GPT-1, though, being first to really scale up transformers. But it'll actually be like with flight, and a whole mass of people will be remembered, just as we now remember everyone from the Wrights to Bleriot, and on to Busemann, Prandtl, even Whitcomb.

darth_aardvark 15 hours ago||
Is "we" the particular set of scientists who know those last four people? Surely you realize they're nowhere near as famous as the Wright brothers, right? This is giving strong https://xkcd.com/2501/ feelings.
impossiblefork 15 hours ago||
Yes, that is indeed the 'we', but I think more people are knowledgeable than is obvious.

I'm not an aerodynamicist, and I know about those guys, so they can't be infinitely obscure. I imagine every French person knows about Bleriot at least.

decimalenough 10 hours ago||
I'm an avgeek with a MSc in engineering. I vaguely recall the name Bleriot from physics, although I have no clue what he actually did. I have never even heard the names Busemann, Prandtl, or Whitcomb.
impossiblefork 9 hours ago||
I find this super surprising, because even I, who don't do aerodynamics, still know about these guys.

Bleriot was a French aviation pioneer, not a physicist. He built the first monoplane. Busemann was an aerodynamicist who invented wing sweep and also did important work on supersonic flight. Prandtl is known for research on lift distribution over wings, wingtip vortices, and induced drag; he basically invented much of the theory of wings. Whitcomb gave his name to the Whitcomb area rule, although Otto Frenzl had come up with it earlier, during WWII.

Scarblac 8 hours ago||
What is wing sweep, what is induced drag, what is the area rule?
skybrian 14 hours ago||||
"The grass is greener elsewhere" isn't inconsistent with a belief that AGI will happen somewhere.

It means you don't have much faith that the company you're working at will be the ones to pull it off.

fragmede 12 hours ago||
With a salary of $10M/year, handwaving that roughly half of it goes to taxes, you'd be making just shy of $100k post-tax per week. Call me a sellout, but goddamn. For that much money, there's a lot of places I could be convinced to put my faith into that I wouldn't otherwise.
skybrian 12 hours ago||
It might buy loyalty for a while, but after it accumulates, for many people it would be "why am I even working at all" money.

And if they don't like their boss and the other job sounds better, well...

raincole 16 hours ago||||
> Being part of the team that achieves AGI first would write your name into history forever. That could mean more to people than money.

Uh, sure. How many of the rocket engineers who worked on the moon landing can you name?

krainboltgreene 14 hours ago||
How many new species of infinite chattel slave did they invent?
tharkun__ 19 hours ago|||
*some people
bombcar 19 hours ago||
> your best move is to do whatever is most profitable in the near-term

Unless you’re a significant shareholder, that’s almost always the best move, anyway. Companies have no loyalty to you and you need to watch out for yourself and why you’re living.

archeantus 15 hours ago||
I read that most of the crazy comp Zuck is offering is in stock. So in a way, going to the place where they'd get lots of stock reflects their belief about where AGI is going to happen first.
bombcar 14 hours ago|||
Comp is comp, no matter how it comes (though the details can vary in important ways).

I know people who've taken quite good comp from startups to do things that would require fundamental laws of physics to be invalidated; they took the money and devised experiments that would show the law to be wrong.

fragmede 12 hours ago||||
Facebook is already public, so they can sell the day it vests and get it in cold hard cash in their bank account. If Facebook weren't public it would be a more interesting point as they couldn't liquidate immediately, but they can, so I wouldn't read anything into that.
LtWorf 12 hours ago|||
But maybe the salary is also higher?
bsenftner 20 hours ago||
Also, AGI is not just around the corner. We need artificial comprehension for that, and we don't even have a theory of how comprehension works. Comprehension is the fusing of separate elements into new functional wholes: dynamically abstracting observations, evaluating them for plausibility, and reconstituting the whole, and all of it instantaneously, across every sense, constantly, for security purposes. We have no technology that approaches that.
Workaccount2 16 hours ago||
We only have two computational tools to work with - deterministic and random behavior. So whatever comprehension/understanding/original thought/consciousness is, it's some algorithmic combination of deterministic and random inputs/outputs.

I know that sounds broad or obvious, but people seem to easily and unknowingly wander into "Human intelligence is magically transcendent".

omnicognate 13 hours ago|||
What you state is called the Physical Church-Turing Thesis, and it's neither obvious nor necessarily true.

I don't know if you're making it, but the simplest mistake would be to think that you can prove that a computer can evaluate any mathematical function. If that were the case, then "it's got to be doable with algorithms" would have a fairly strong basis. Anything the mind does that an algorithm can't would have to be so "magically transcendent" that it's beyond the scope of the mathematical concept of "function". However, this isn't the case. There are many mathematical functions that are proven to be impossible for any algorithm to implement. Look up uncomputable functions if you're unfamiliar with this.

The second mistake would be to think that we have some proof that all physically realisable functions are computable by an algorithm. That's the Physical Church-Turing Thesis mentioned above, and as the name indicates it's a thesis, not a theorem. It is a statement about physical reality, so it could only ever be empirically supported, not some absolute mathematical truth.

It's a fascinating rabbit hole if you're interested - what we actually do and do not know for sure about the generality of algorithms.
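
For a concrete instance: the halting function is perfectly well-defined mathematically, yet provably has no algorithm. The standard diagonalization argument, sketched in Python (the `halts` oracle is precisely the thing that cannot exist):

    # Assume, for contradiction, an algorithm halts(source, arg) that
    # always answers correctly "does this program halt on this input?".
    def paradox(source):
        if halts(source, source):   # the impossible oracle
            while True:             # ...then loop forever
                pass
        # ...otherwise halt immediately.

    # Run paradox on its own source: it halts if and only if halts()
    # says it doesn't. Contradiction, so no such algorithm exists,
    # even though the halting function is a well-defined function.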

RaftPeople 13 hours ago||||
> but people seem to easily and unknowingly wander into "Human intelligence is magically transcendent".

But the poster you responded to didn't say it's magically transcendent; they just pointed out that there are many significantly hard problems that we don't have solutions for yet.

__loam 10 hours ago|||
We don't understand human intelligence enough to make any comparisons like this
tenthirtyam 19 hours ago|||
You'd need to define "comprehension" - it's a bit like the Chinese room / Turing test.

If an AI or AGI can look at a picture and see an apple, or (say) with an artificial nose smell an apple, or likewise feel or taste or hear* an apple, and at the same time identify that it is an apple and maybe even suggest baking an apple pie, then what else is there to be comprehended?

Maybe humans are just the same - far far ahead of the state of the tech, but still just the same really.

*when someone bites into it :-)

For me, what AI is missing is genuine out-of-the-box revolutionary thinking. They're trained on existing material, so perhaps it's fundamentally impossible for AIs to think up a breakthrough in any field - barring circumstances where all the component parts of a breakthrough already exist and the AI is the first to connect the dots ("standing on the shoulders of giants" etc).

RugnirViking 17 hours ago|||
It's very very good at sounding like it understands stuff. Almost as good as actually understanding stuff in some fields, sure. But it's definitely not the same.

It will confidently analyze and describe a chess position using advanced-sounding book techniques, but it's all fundamentally flawed, often missing things that are extremely obvious (like an undefended queen free to take) while trying to sound like a seasoned expert - that is, if it doesn't completely hallucinate moves that are not allowed by the rules of the game.

This is how it works in the other fields I am able to analyse, too. It's very good at sounding like it knows what it's doing, speaking at the level of a masters student or higher, but its actual appraisal of problems is often wrong, in a way very different to how humans make mistakes. Another great example is getting it to solve cryptic crosswords from back in the day. It often knows the answer already from its training set, but it hasn't seen anyone write out the reasoning for the answer, so if you ask it to explain, it makes nonsensical leaps ("birch rhymes with tyre"-level nonsense).

filleduchaos 15 hours ago|||
If anyone wants to see the chess comprehension breakdown in action, the YouTuber GothamChess occasionally puts out videos where he plays against a new or recently-updated LLM.

Hanging a queen is not evidence of a lack of intelligence - even the very best human grandmasters will occasionally do that. But in pretty much every single video, the LLM loses the plot entirely after barely a couple dozen moves and starts to resurrect already-captured pieces, move pieces to squares they can't get to, etc - all while keeping the same confident "expert" tone.

DiogenesKynikos 15 hours ago|||
A sufficiently good simulation of understanding is functionally equivalent to understanding.

At that point, the question of whether the model really does understand is pointless. We might as well argue about whether humans understand.

andrei_says_ 13 hours ago|||
In the Catch Me If You Can movie, Leonardo DiCaprio's character wears a surgeon's gown and confidently says "I concur".

What I’m hearing here is that you are willing to get your surgery done by him, and not by one of the real doctors, if he is capable of pronouncing enough doctor-sounding phrases.

bsenftner 10 hours ago|||
If that's what you're hearing, then you're not thinking it through. Of course one would not want an AI acting as one's real doctor, but a medical or law school graduate studying for a license sure would appreciate a Socratic tutor in their specialization. Likewise, on the job in a technical specialization, a sounding board is of more value when it follows along, potentially with a virtual board of debate, and questions when logical drifts occur. It's not AI thinking for you; it is AI critically assisting your exploration through Socratic debate. Do not place AI in charge of critical decisions, but do place it in the service of the people figuring out such situations.
amlib 9 hours ago||
The doctor analogy still applies: that "Socratic tutor" LLM is actually a charlatan that sounds, to the untrained mind, like a competent person, but in actuality is a complete farce. I still wouldn't trust that.
DiogenesKynikos 2 hours ago|||
Leo diCaprio's character says nothing of substance in that scene. If you ask an LLM a question about most subjects, it will give you a highly intelligent, substantive answer.
vrighter 21 minutes ago||
it gives you an answer. Not a highly intelligent one. Just an answer. And if it doesn't know what it's talking about, it'll still give an answer.
timacles 11 hours ago||||
> A sufficiently good simulation of understanding is functionally equivalent to understanding.

This is just a thing to say that has no substantial meaning.

  - What is "sufficiently" mean? 
  - What is functionally equivalent? 
  - and what is even understanding?
All just vague hand waving

We're not philosophizing here, we're talking about practical results and clearly, in the current context, it does not deliver in that area.

> At that point, the question of whether the model really does understand is pointless.

You're right, it is pointless, because you are suggesting something that doesn't exist. The current models cannot understand.

og_kalu 2 hours ago|||
>We're not philosophizing here, we're talking about practical results and clearly, in the current context, it does not deliver in that area.

Except it clearly does, in a lot of areas. You can't take a 'practical results trump all' stance and come out of it saying LLMs understand nothing. They understand a lot of things just fine.

DiogenesKynikos 2 hours ago|||
The current models obviously understand a lot. They would easily understand your comment, for example, and give an intelligent answer in response. The whole "the current models cannot understand" mantra is more religious than anything.
RugnirViking 11 hours ago|||
That's the point, though: it's not sufficient. Not even slightly. It constantly makes obvious mistakes and cannot keep things coherent.

I was almost going to explicitly mention your point but deleted it because I thought people would be able to understand.

This is not philosophy/theology, sitting around handwringing about "oh, but would a sufficiently powerful LLM be able to dance on the head of a pin". We're talking about a thing that actually exists, that you can actually test. In a whole lot of real-world scenarios that you throw at it, it fails in strange and unpredictable ways. Ways that it will swear up and down it did not do. It will lie to your face. It's convincing. But then it will lose at chess, it will fuck up running a vending machine business, it will get lost coding and reinvent the same functions over and over, it will give completely nonsensical answers to crossword puzzles.

This is not an intelligence that is unlimited; it is a deeply flawed two-year-old that just so happens to have read the entire output of human writing. It's a fundamentally different mind to ours, and it makes different mistakes. It sounds convincing and yet fails, constantly. It will give you a four-step explanation of how it's going to do something, then fail to execute four simple steps.

bsenftner 10 hours ago||
Which is exactly why it is insane that the industry is hell-bent on creating autonomous automation through LLMs. Rube Goldberg machines are what will be created, and if civilization survives that insanity, it will be looked back upon as one grand, stupid era.
Touche 18 hours ago||||
They might not be capable of ingenuity, but they can spot patterns humans can miss. And that accelerates AI research, where it might help invent the next AI that helps invent the next AI that finally can think outside the box.
bsenftner 18 hours ago||||
I do define it, right up there in my OP. It's subtle, you missed it. Everybody misses it, because comprehension is like air, we swim in it constantly, to the degree the majority cannot even see it.
add-sub-mul-div 18 hours ago|||
Was that the intention of the Chinese room concept, to ask "what else is there to be comprehended?" after producing a translation?
andy99 16 hours ago|||
Another way to put it is that we need Artificial Intelligence. Right now the term has been co-opted to mean prediction (and, more commonly, transcript generation). The stuff you're describing is what's commonly thought of as intelligence; it's too bad we need a new word for it.
bsenftner 10 hours ago||
No, we have the intelligence part; we know what to do when we have the answers. What we don't know is how to derive the answers without human intervention at all, not even our written knowledge. Artificial comprehension will not require anything beyond senses: observations through time, which build a functional world model from observation and interaction, capable of navigating the world as a communicating participant. Note I'm not talking about agency, also called "will", which is separate from both intelligence and comprehension. Where intelligence is "knowing", comprehension is the derivation of knowing from observation and interaction alone, and agency is the entirely separate ability to choose action over inaction, to employ comprehension to affect the world - and for what purpose?
zxcb1 12 hours ago|||
Translation Between Modalities is All You Need

~2028

ekianjo 7 hours ago||
> We need artificial comprehension for that, and we don't even have a theory of how comprehension works.

Not sure we need it. The counterexample is the LLM itself: we had absolutely zero idea that attention heads would bring such benefits down the road.

drillsteps5 9 hours ago||
I can't speak intelligently about how close AGI really is (I do not believe it is close, but I guess someone, somehow, somewhere might come up with a brilliant idea that nobody has thought of so far, and voila).

However, I'm flabbergasted by the lack of attention to so-called "hallucinations" (which is a misleading, I mean marketing, term; we should be talking about errors or inaccuracies).

The problem is that we don't really know why LLMs work. I mean, you can run the inference and apply the formula and get output from the given input, but you can't "explain" why the LLM produced phrase A as an output instead of B, C, or N. There are just too many parameters and computations to go through, and the very concept of "explaining" or "understanding" might not even apply here.

And if we can't understand how this thing works, we can't understand why it doesn't work properly (produces wrong output) and also don't know how to fix it.

And instead of talking about it and trying to find a solution, everybody moved on to agents, which are basically LLMs empowered to perform complex actions IRL.

How does this make any sense to anybody? I feel like I'm crazy or missing something important.

I get it, a lot of people are making a lot of money and a lot of promises are being made. But this is an absolutely fundamental issue that is not that difficult to understand for anybody with a working brain, and yet I am really not seeing any attention paid to it whatsoever.

Bratmon 9 hours ago||
You can get use out of a hammer without understanding how the strong force works.

You can get use out of an LLM without understanding how every node works.

drillsteps5 8 hours ago|||
A hammer is not a perfect analogy because of how simple it is, but sure, let's go with it.

Imagine that occasionally, on contact with the nail, it shatters to bits, or goes through the nail as if it were liquid, or blows up, or does something else completely unexpected. Wouldn't you want to fix it? And sure, that might require a deep understanding of the nature of the materials and forces involved.

That's what I'd do.

potamic 2 hours ago|||
A better analogy might be something like medicine. There are many drugs prescribed that are known to help with certain conditions, but their mechanism of action is not known. While there may be research trying to uncover those mechanisms, that doesn't stop or slow down rolling out of the medicine for use. Research goes at its own pace, and very often cannot be sped up by throwing money at it, while the market dictates adoption. I see the same with LLMs. I'm sure this has attracted the attention of more researchers than anything else in this field, but I would expect any progress to be relatively slow.
m11a 7 hours ago|||
Use the human brain as an example then. We don't really know how it works. I mean, we know there's neurotransmitters and neural pathways etc (much like nodes in a transformer), but we don't know how exactly intelligence or our thinking process works.

We're also pretty good at working around human 'hallucinations' and other inaccuracies. Whether it be someone having a bad day, a brain fart, or individual clumsiness. eg in a (bad) organisation, sometimes we do it with layers of reviews and committees, much like layers of LLMs judging each other.

I think too much is attached to the notion of "we don't understand how the LLM works". We don't understand how any complicated intelligence works, and potentially won't for the foreseeable future.

More generally, a lot of society is built up from empirical understanding of black box systems. I'd claim the field of physics is a prime example. And we've built reliable systems from unreliable components (see the field of distributed systems).

alganet 9 hours ago|||
You can get injured by using a hammer without understanding how it works.

You can damage a company by using a spreadsheet and not understanding how it works.

In your personal opinion, what are the things you should know before using an LLM?

Scarblac 8 hours ago|||
LLM hallucinations aren't errors.

LLMs generate text based on weights in a model, and some of it happens to be correct statements about the world. Doesn't mean the rest is generated incorrectly.

jvanderbot 8 hours ago||
You know the difference between verification and validation?

You're describing a lack of errors in verification (working as designed/built, equations correct).

GP is describing an error in validation (not doing what we want / require / expect).

dummydummy1234 9 hours ago||
I guess a counter is that we don't need to understand how they work to produce useful output.

They are a magical black-box magic 8-ball that more likely than not gives you the right answer. Maybe people can explain the black box and make the magic 8-ball more accurate.

But at the end of the day, a very complex system will always be, at some level, a black-box, unreliable magic 8-ball.

So the question then is how you build a reliable system from unreliable components, because LLMs, used directly, are unreliable.

The answer to this is agents, i.e. feedback loops between multiple LLM calls, which in isolation are unreliable but in aggregate approach reliability.

At the end of the day, the bet on agents is a bet that the model companies will not get a model that is magically 100% correct on the first try.
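
A concrete version of that bet is self-consistency voting: sample several independent answers and keep the most common one. A toy sketch, assuming some `ask_llm(question)` call:

    from collections import Counter

    def self_consistent_answer(ask_llm, question, n=5):
        # Sample n independent answers. Uncorrelated errors rarely
        # agree, so the majority answer is more reliable than any
        # single call; this only works when answers are comparable.
        answers = [ask_llm(question) for _ in range(n)]
        best, votes = Counter(answers).most_common(1)[0]
        return best, votes / n    # answer plus rough agreement score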

drillsteps5 8 hours ago||
THAT. This is what I don't get: instead of fixing a complex system, let's build an even more complex system on top of it, knowing that it might not always work.

When you have a complex system that does not always work correctly, you start disassembling it into simpler and simpler components until you find the one, or maybe several, that are not working as designed; you fix whatever you found wrong with them, put the complex system together again, and test it to make sure your fix worked, and you're done. That's how I debug complex cloud-based/microservices-infected software systems, and that's how they test software/hardware systems found in aircraft, rockets, and whatever else. It's such a fundamental principle to me.

If an LLM is a black box by definition and there's no way to make it consistently work correctly, what is it good for?..

ekianjo 7 hours ago|||
> If an LLM is a black box by definition and there's no way to make it consistently work correctly, what is it good for?..

Many things are unpredictable in the real world. Most of the machines we make are built upon layers of redundancies to make imperfect systems stable and predictable. This is no different.

habinero 5 hours ago||
It is different. Most systems aren't designed to be a slot machine.
ekianjo 3 hours ago||
Yet RAG systems can perform quite well, so it's definite proof that you can build something reliable most of the time out of something not reliable in the first place.
habinero 5 hours ago|||
Honestly? Spam and upselling executives on features that don't work. It's a pretty good autocomplete, too.
Animats 12 hours ago||
"A disturbing amount of effort goes into making AI tools engaging rather than useful or productive."

Right. It worked for social media monetization.

"... hallucinations ..."

The elephant in the room. Until that problem is solved, AI systems can't be trusted to do anything on their own. The solution the AI industry has settled on is to make hallucinations an externality, like pollution. They're fine as long as someone else pays for the mistakes.

LLMs have a similar problem to Level 2-3 self-driving cars. They sort of do the right thing, but a human has to be poised to quickly take over at all times. It took Waymo a decade to get over that hump and reach level 4, but they did it.

jasonsb 10 hours ago||
> The elephant in the room. Until that problem is solved, AI systems can't be trusted to do anything on their own.

AI systems can be trusted to do most things on their own. You can't trust them for actions with irreversible consequences, but everything else is OK.

I can use them to write documents, code, create diagrams, designs etc. I just need to verify the result, but that's 10% of the actual work. I would say that 90% of modern day office work can be done with the help of AI.

daxfohl 5 hours ago||
And for a lot of things, we don't trust single humans to do it on their own either. It's just a matter of risk and tolerance. AI isn't really any different, except it's currently far less reliable than humans for many tasks. But for some tasks it's more reliable. And the gap could close for other tasks pretty quickly. Or not. But I don't think getting to zero hallucinations is a prereq for anything.
nunez 10 hours ago|||
Waymo "did it" in very controlled environments, not in general. They're still a ways away from solving self-driving in the general case.
Animats 10 hours ago|||
Los Angeles and San Francisco are not "very controlled environments".
__loam 10 hours ago|||
They've done over 70 million rider-only miles on public roads.
cal85 11 hours ago||
When you say “do anything on their own”, what kind of things do you mean?
Animats 11 hours ago||
Take actions which have consequences.
coldcode 19 hours ago||
I never trusted them from the start. I remember the hype that came out of Sun when J2EE/EJBs appeared. Their hype documents said the future of programming was buying EJBs from vendors and wiring them together. AI is of course a much bigger hype machine, with massive investments that need to be justified somehow. AI is a useful tool (sometimes) but not a revolution. ML is a much more useful tool. AGI is a pipe-dream fantasy pushed to make it seem like AI will change everything, as if AI were like the discovery of fire.
ffsm8 16 hours ago|
I completely agree that LLMs are missing a fundamental part needed for AGI, which itself is a long way off from superintelligence.

However, you don't need either of these to completely decimate the job markets and by extension our societies.

Historically speaking, "good enough" and cheaper has always won over "better, but more expensive". I suspect LLMs will raise this question endlessly until significant portions of society are struggling - and who knows what will happen then.

Before LLMs started going anywhere, I thought that was going to be an issue for later generations, but at this point I suspect we'll witness it within the next 10 years.

TrackerFF 13 hours ago||
My question is this: once you achieve AGI, what moat do you have, purely on the scientific side, other than making the AGI even more intelligent?

I see a lot of talk that the first company to achieve AGI will also achieve market dominance; all other players will crumble. But surely when someone achieves AGI, their competitors will in all likelihood be following closely after. And once those achieve AGI, academia will follow.

Point is, at some point AGI itself will become available to everyone. The only things that will be out of reach for most are compute - and probably other expensive things on the infrastructure side.

Current AI funding seems to revolve around some sort of winner-take-all scenario. Just keep throwing incredible amounts of money at it, and hope that you've picked the winner. I'm just wondering what the outcome will be if this thesis turns out wrong.

imiric 12 hours ago||
> The only things that will be out of reach for most are compute - and probably other expensive things on the infrastructure side.

That is the moat. That, and training data.

Even today, compute and data are the only things that matter. There is hardly any secret software sauce. This means that only large corporations with a practically infinite amount of resources to throw at the problem could potentially achieve AGI. Other corporations would soon follow, of course, but the landscape would be similar to what it is today.

This is all assuming that the current approaches can take us there, of which I'm highly skeptical. But if there's a breakthrough at some point, we would still see AI tightly controlled by large corporations that offer it as a (very expensive) service. Open source/weight alternatives would not be able to compete, just like they don't today. Inference would still require large amounts of compute only accessible to companies, at least for a few years. The technology would be truly accessible to everyone only once the required compute becomes a commodity, and we're far away from that.

If none of this comes to pass, I suspect there will be an industry-wide crash, and after a few years in the Trough of Disillusionment, the technology would re-emerge with practical applications that will benefit us in much more concrete and subtle ways. Oh, but it will ruin all our media and communication channels regardless, directly causing social unrest and political regression, that much is certain. (:

daxfohl 5 hours ago||
I think if any of this becomes possible, it won't happen. Seriously, if AGI were truly on the horizon at OpenAI or elsewhere, the first thing they'd do is shut it down. Once it's AGI, they would have to realize that they can't control it any more than anyone else can, and facing the reality of that would stop them in their tracks.

In a way, all the hype can only indicate that AGI is still a distant illusion. If it were really around the corner we'd be hearing different stories.

fragmede 12 hours ago||
Same thing that happened to pets.com or webvan.com and the rest of the graveyard of failed companies: a bunch of investors lose money, a bunch of market consolidation, employees get diluted to worthlessness, Chapter 7, Chapter 11. The free ride of today's equivalent of $1 Ubers will end. A glut of previously very expensive hardware for cheap on eBay (though I doubt this last point will happen, since AGI is likely to be compute-intensive).

It's not going to be fun or easy, but as far as the financials go, we were there in 2001.

The question is, assuming we do get AGI, what the ramifications of that will be. Instead of hiring employees, a business can spin employees up (and down) like a tech company spins up EC2 instances. Great for employers, terrible for employees.

That's a big "if" though.

computerphage 17 hours ago||
> This is purely an observation: You only jump ship in the middle of a conquest if either all ships are arriving at the same time (unlikely) or neither is arriving at all. This means that no AI lab is close to AGI. Their stated AGI timelines are “at the latest, in a few years,” but their revealed timelines are “it’ll happen at some indefinite time in the future.”

This makes no sense to me at all. Is it a war metaphor? A race? Why is there no reason to jump ship? Doesn't it make sense to try to get on the fastest ship? Doesn't it make sense to diversify your stock portfolio if you have doubts?

JunkDNA 16 hours ago||
I keep seeing this charge that AI companies have an "Uber problem", meaning the business is heavily subsidized by VC money. Is there any analysis that explains how this breaks down (training vs. inference, and what current pricing covers)? At least with Uber you had a cab fare as a benchmark. But what should, for example, ChatGPT actually cost me per month without the VC subsidy? How far off are we?
fragmede 12 hours ago||
It depends on how far behind you believe the model-available LLMs are. Say I can buy $10k worth of hardware and run a sufficiently equivalent LLM at home for the cost of that plus electricity. Amortize the hardware over, say, 5 years to get $2k/yr, and say you use it 40 hours a week for 50 weeks, i.e. 2,000 hours/yr: that gets you $1/hr plus electricity. The electrical cost will vary by location, but let's just handwave $1/hr (which should be high). So $2/hr, vs. ChatGPT's $0.11/hr if you pay $20/month and use it 174 hours per month.

Feel free to challenge these numbers, but it's a starting place. What's not accounted for is the cost of training (compute time, but also employees and everything else), which needs to be amortized over the length of time a model is used, so ChatGPT's costs rise significantly; but they do have the advantage that hardware is shared across multiple users.
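
The same arithmetic, spelled out (all inputs are the handwaved numbers above):

    hardware = 10_000                # $, local rig
    years = 5
    hours_per_year = 40 * 50         # 40 h/week, 50 weeks -> 2,000 h
    hw_per_hour = hardware / (years * hours_per_year)   # $1.00/h
    local = hw_per_hour + 1.00       # plus ~$1/h electricity -> $2.00/h

    chatgpt = 20 / 174               # $20/month over 174 h/month
    print(f"local ~${local:.2f}/h vs ChatGPT ~${chatgpt:.2f}/h")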

nbardy 11 hours ago||
These estimates are way off. Concurrent requests are near-free with the right serving infrastructure; on a fully saturated node, the cost per token is 1/100th to 1/1000th of that price.
cratermoon 16 hours ago||
https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the...
JunkDNA 16 hours ago||
This article isn’t particularly helpful. It focuses on a ton of specific OpenAI business decisions that aren’t necessarily generalizable to the rest of the industry. OpenAI itself might be out over its skis, but what I’m asking about is the meta-accusation that AI in general is heavily subsidized. When the music stops, what does the price of AI look like? The going rate for chat bots like ChatGPT is $20/month. Does that go to $40 a month? $400? $4,000?
handfuloflight 14 hours ago|||
How much would OpenAI be burning per month if each monthly active user cost them $40? $400? $4000?

The numbers would bankrupt them within weeks.

cratermoon 11 hours ago|||
OK, how about another article that mentions the other big players, including Anthropic, Microsoft, and Google. https://www.wheresyoured.at/reality-check/
DavidPiper 6 hours ago|
AI is the new politics.

It's surprising to me the number of people I consider smart and deep original thinkers who are now parroting lines and ideas (almost word-for-word) from folks like Andrej Karpathy and Sam Altman, etc.

But, of course, "Show me the incentive and I will show you the outcome" never stops being relevant.
