Posted by adrianhon 1 day ago
Shamefully, I have to admit that my monkey-brain smirked at an accidental 67-meme in a serious article.
This technology needs to become a commodity to break up this aggregation of power among a few organizations with untrustworthy incentives and leadership.
And we’re just talking about cognition. That completely ignores the automatic processes: maintaining and regulating the body and its hormones, coordinating and maintaining muscles, visual/spatial processing that takes in massive amounts of data at a very fine scale and tells the body what to do with it. I could go on.
One of the more annoying things about this conversation is that you don’t even need to make this argument to make the point you’re trying to make, but people love doing it anyway. It needlessly reduces how amazing the human brain is to a bunch of catchy, sci-fi-sounding idioms.
It can be simultaneously true that transformer based language models can be very smart and that the human brain is also very smart. It genuinely confuses me why people need to make it an either/or.
I think it's the hubris that I find most offensive in this argument: a guy knows one complex thing (programming) and suddenly thinks he can make claims about neuroscience.
In my experience, this is exactly how language models solve hard new problems, and largely how I solve them too. Propose a new idea, see if it works, iterate if not, keep going until it works.
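The propose/test/iterate loop described above can be sketched in a few lines. This is a hypothetical illustration, not anything from the thread: the names `solve_by_iteration`, `propose`, `works`, and `refine` are all made up, and the square-root example is just one concrete instance of guess-and-refine.

```python
def solve_by_iteration(propose, works, refine, max_tries=100):
    """Generic propose -> test -> refine loop (illustrative sketch)."""
    candidate = propose()
    for _ in range(max_tries):
        if works(candidate):
            return candidate
        candidate = refine(candidate)  # didn't work, try a better guess
    return None

# Example instance: find sqrt(2) by repeatedly refining a guess
# (Newton's step for x^2 = 2).
target = 2.0
answer = solve_by_iteration(
    propose=lambda: 1.0,                       # initial guess
    works=lambda x: abs(x * x - target) < 1e-9,
    refine=lambda x: (x + target / x) / 2,     # move toward the root
)
```

The point is only that "keep going until it works" is itself an algorithm, whether a human or a model runs it.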
Of course you can see how to solve a problem that you've seen before, like a visual puzzle about balanced parentheses. We're hyper specialized to visually identify asymmetries. LMs don't have eyes. Your mockery proves nothing.
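For what it's worth, the balanced-parentheses puzzle mentioned above doesn't even need vision; the standard technique is a running depth counter. A minimal sketch (the function name `balanced` is my own):

```python
def balanced(s: str) -> bool:
    """Classic check: track nesting depth; it must never go negative
    and must end at zero."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:  # a ')' appeared before its matching '('
                return False
    return depth == 0
```

So the interesting question is whether a model has internalized something like this procedure, not whether it can "see" the asymmetry.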
I've wondered about this. Do we really know enough about what the human brain is doing to make a statement like this? I feel like if we did, we would be able to model it faithfully and OpenAI, etc. would not be doing what they're doing with LLMs.
What if human cognition turns out to be the biological equivalent of a really well-tuned prediction machine, and LLMs are just a more rudimentary and less-efficient version of this?
Theoretically a human could sit alone in a dark room, knowing nothing of mathematics, and come up with numbers, arithmetic, algebra, etc...
They don't need to read every math textbook, paper, and online discussion in existence.
In your example, would the human have ever had contact with other humans, or would it be placed in the room as a baby with no further input?
This might sound callous, but I wonder if the people saying this themselves have very limited brains, more akin to stochastic parrots than the average Homo sapiens.
We are very different, and there are some high-profile people who don't even have an internal monologue or self-introspection abilities (one of the other symptoms is having an egg-shaped head)
I have a different theory.
Aside from a few exceptions like Blake Lemoine, few people seem to really act as if they believe A.I. is doing the same thing the human mind is doing.
My theory is people are for some reason role-playing as people who believe human thought is equivalent to A.I. for undisclosed reasons they themselves may or may not understand. They do not actually believe their own arguments.
A parrot that writes better code and English prose than I do?
I would like to buy your parrot.
If they discover the cure to cancer, I don't care how they did it. "I don't trust anyone who claims they're superhumanly intelligent" doesn't follow from "all they do is <how they work>".
- it’s the worst it’ll ever be - big leaps happened the past few months bro
Etc.
Personally I think LLMs can be very powerful in a narrow band. But the more substance a thing involves, the more a human needs to be involved.
It kind of does if how they work is nothing like genuine intelligence. You can (rightly) think AI is incredible and amazing and going to bring us amazing new medical technologies, without wrongly thinking its super amazing pattern recognition is the same thing as genuine intelligence. It should be worrying if people begin to believe the stochastic parrot is actually wise.
That's not a cure. Like yes, I'd care if the AI says it cures cancer while nuking Chicago. But that isn't what OP said.
Altman SAYS he does not recall the exchange. Not the same thing.
The main reason is that he gets all the downsides without the upsides. I know $5B is a lot, but for a $700B company it isn't. If OpenAI were a regular for-profit, he would have been worth >$100B already.
This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.
It's not like he had anything to do with the technical achievements, except convincing the engineers that they were doing something valuable, but the cat is out of the bag on that.
All the downsides without much upside...
Sergey Brin is trying to change that lately, but Altman still has a sizable head start.
The fact that some (usually toxic) individuals get there shows that the system is flawed.
The fact that those individuals feel like they can do anything other than shut up, stay low and silently enjoy the fact that they got waaaay too much money shows that the system is very flawed.
We shouldn't follow billionaires, we should redistribute their money.
I can see an argument when it comes to cashing out, but I’m not clear how that should work without creating really weird incentives. Some sort of special tax?
Well yeah. Above some amount, you get taxed at 100%. So instead of having billionaires who compete against each other on how rich they are, or on being the first to go contaminate the surface of Mars, or simply on power, maybe we would end up with people trying to compete on something actually constructive :-). Who knows, maybe even philanthropy!
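Mechanically, a 100%-above-a-cap scheme is just an extra marginal bracket. A toy sketch, where the cap of $1B and the flat 30% rate below it are entirely made-up numbers for illustration (the function name `tax_owed` is hypothetical too):

```python
def tax_owed(wealth, cap=1_000_000_000, rate_below=0.30):
    """Illustrative wealth tax: a flat hypothetical rate below `cap`,
    and a 100% rate on everything above it."""
    if wealth <= cap:
        return wealth * rate_below
    # below-cap portion taxed at the flat rate, excess taken entirely
    return cap * rate_below + (wealth - cap)

# Someone worth $2B keeps nothing above the cap:
print(tax_owed(2_000_000_000))  # 0.3 * 1e9 + 1e9 = 1.3e9
```

Whether such a bracket is desirable is the debate above; the arithmetic itself is trivial.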
I'm not against higher taxation of the wealthy. I think inequality is a serious problem. The issue is that the wealth of these people isn't a big pile of cash they're wallowing in; it's ownership of the companies they build and operate. Is that what we want to take away? How, and what would we do with it?
I think it makes more sense to tax it as that power is converted into cash. I'm not clear how a wealth tax should work.
Yeah, that makes sense to me. And those are all good questions of course :-).
> So, who owns and runs the companies?
I guess ownership stays the same; we just need to prevent the companies from growing too big. Because the bigger they are, the more powerful their leaders get, for one thing (aside from all the problems that come from monopolies). But by taxing them, we prevent the people owning those companies from owning 15 yachts and going to space for breakfast :D.
> How do new companies get formed?
I don't know if that's what you mean, but I often hear "if you prevent those visionaries from becoming crazy rich, nobody will build anything, ever". And I disagree. A ton of people like to build stuff knowing they won't get rich. Usually those people have better incentives (it's hard to have a worse incentive than "becoming rich and powerful", right?).
Some people say "we need to pay so much for this CEO, because otherwise he will go somewhere else and we won't have a competent CEO". I think this is completely flawed. You will always find someone competent to be the CEO of a company with a reasonable salary. Maybe that person will not work 23h a day, maybe they won't harass their workers, sure. But will it be worse in the end? The current situation is that such tech companies are "part of the problem, not of the solution" (the problem being, currently, that we are failing to just survive on Earth).
I want skilled institutional investors who have a track record of making smart bets. I don't want a random person who happened to get lucky in business dictating investment policy for substantial parts of the economy. I want accountability for abuses and mismanagement.
I know China gets a bad rap, but their bird-cage market economy seems a lot more stable and predictable than the wild-west pyramid-scheme stuff we do in the US. Maybe there are advantages for some people in our model, but I really dislike the part where we consistently reward amoral grifters.
100%. First, a company should not be that big. The whole point of antitrust was to avoid that. The US failed at that, for various reasons, and now ends up with huge tech monopolies. And it's difficult to go back because they are so big now.
BTW I would recommend Cory Doctorow's book about those tech monopolies: "Enshittification: why everything suddenly got worse and what to do about it". He explains extremely well the antitrust policies and the problems that arise when you let your companies get too big. It's full of actual examples of tech we all know. He even has an audiobook, narrated by himself!
My point is that it all only benefits a few people. Those people used to call themselves "kings", appointed by God. Now they are tech oligarchs. People eventually realised it was bad to have kings; maybe eventually they will realise it is bad to have oligarchs?