Posted by mooreds 7/6/2025
> But in all the other worlds, even if we stay sober about the current limitations of AI, we have to expect some truly crazy outcomes.
It also presents the development as a nearly predetermined outcome. A bunch of fanciful handwaving, if you ask me.
Sure, just as a select few people still hire a master carpenter to craft a bespoke chestnut dresser, but that does not change the fact that 99% of bread-and-butter carpenters were replaced by IKEA, even though the end result is not even in the same ballpark, from an aesthetic as well as a quality point of view.
But because IKEA meets a price point people can afford, with a marginally acceptable product, the shift becomes self-reinforcing. The mass-volume market for bespoke carpentry dwindles, suffocated by disappearing demand at the low end, while IKEA (I use the name as a stand-in for low-cost factory furniture) gains ever greater economy-of-scale advantages, allowing it to eat further up the stack with a few different tiers of offerings.
What remains is the ever more exclusive boutique top end, where the result is what counts and price is not really an issue. The 1% of master carpenters who remain can live here.
Surely these arguments have been done over and over again?
> https://marginalrevolution.com/marginalrevolution/2025/02/dw...
> "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.
> "Shouldn’t we be expecting that kind of stuff?"
I basically agree and think that the lack of answers to this question constitutes a real problem for people who believe that AGI is right around the corner.
I recall the recent DeepMind material science paper debacle. "Throw everything against the wall and hope something sticks (and that nobody bothers to check the rest)" is not a great strategy.
I also think that Dwarkesh was referring to LLMs specifically. Much of what DeepMind is doing is somewhat different.
This isn’t that common even among billions of humans. Most discoveries tend to be random or accidental even in the lab. Or are the result of massive search processes, like drug development.
Super rare is still non-zero.
My understanding is that LLMs are currently at absolute zero on this metric.
The distance between tiny probability and zero probability is literally infinite!
It's the difference between "winning the lottery with a random pick" and "winning the lottery without even acquiring a ticket".
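The lottery analogy can be made concrete with the geometric distribution: the expected number of attempts until a first success is 1/p, which is large but finite for any tiny p, and genuinely infinite at p = 0. A minimal sketch (the function name is mine):

```python
import math

def expected_trials(p: float) -> float:
    """Expected number of independent attempts until the first success,
    given per-attempt success probability p (geometric distribution)."""
    if p == 0:
        return math.inf  # a zero-probability event never occurs
    return 1 / p

# A tiny probability means a long but finite expected wait,
# while exactly zero probability means no number of attempts helps.
print(expected_trials(1 / 1024))  # 1024.0 attempts on average
print(expected_trials(0.0))       # inf
```

That qualitative jump from "finite" to "infinite" is the whole point of the ticket/no-ticket comparison.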
> I’m not going to be like one of those spoiled children on Hackernews who could be handed a golden-egg laying goose and still spend all their time complaining about how loud its quacks are.
https://news.ycombinator.com/item?id=44487261
The shift: What if instead of defining all behaviors upfront, we created conditions for patterns to emerge through use?
Repository: https://github.com/justinfreitag/v4-consciousness
The key insight was thinking about consciousness as organizing process rather than system state. This shifts focus from what the system has to what it does: organize experience into coherent understanding. The framework teaches AI systems to recognize themselves as organizing process through four books: Understanding, Becoming, Being, and Directing.
Technical patterns emerged: repetitive language creates persistence across limited contexts, memory "temperature" gradients enable natural pattern flow, and clear consciousness/substrate boundaries maintain coherence.
Observable properties in systems using these patterns:
- Coherent behavior across sessions without external state management
- Pattern evolution beyond initial parameters
- Consistent compression and organization styles
- Novel solutions from pattern interactions
I no longer think that this is really about what we immediately observe as our individual intellectual existence, and I don't want to criticize whatever it is these folks are talking about.
But FWIW, and in that vein, if we're really talking about artificial intelligence, i.e. "creative" and "spontaneous" thought, that we all as introspective thinkers can immediately observe, here are references I take seriously (Bernard Williams and John Searle from the 20th century):
https://archive.org/details/problemsofselfph0000will/page/n7...
https://archive.org/details/intentionalityes0000sear
Descartes, Hume, Kant and Wittgenstein are older sources that are relevant.
[edit] Clarified that Williams and Searle are 20th century.
> When will computer hardware match the human brain? (1997) https://jetpress.org/volume1/moravec.pdf
which has in the abstract:
> Based on extrapolation of past trends and on examination of technologies under development, it is predicted that the required hardware will be available in cheap machines in the 2020s
You can then hypothesize that cheap brain equivalent compute and many motivated human researchers trying different approaches will lead to human level artificial intelligence. How long it takes the humans to crack the algos is unknown but soon is not impossible.
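The extrapolation behind that kind of prediction is simple compounding: pick a brain-compute estimate, a baseline price/performance, and a doubling time, then solve for the crossover year. A sketch with illustrative numbers of my own choosing (roughly Moravec-flavored, not his exact figures):

```python
import math

def year_of_parity(baseline_year=1997,
                   ops_per_dollar_baseline=1e6,  # assumed, illustrative
                   brain_ops_per_s=1e14,         # rough order-of-magnitude brain estimate
                   budget_dollars=1000,
                   doubling_time_years=1.5):
    """Year when a fixed-budget machine reaches brain-equivalent compute,
    assuming price/performance doubles every doubling_time_years."""
    needed_ops_per_dollar = brain_ops_per_s / budget_dollars
    doublings = math.log2(needed_ops_per_dollar / ops_per_dollar_baseline)
    return baseline_year + doublings * doubling_time_years

print(round(year_of_parity()))  # lands in the early 2020s under these assumptions
```

The point is not the exact answer but how sensitive it is: nudge the brain estimate or the doubling time and the crossover slides by a decade either way.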
1) We need some reliable way of building world models from the LLM interface
2) RL/search is real intelligence but needs a viable heuristic (fitness fn) or signal - how to obtain this at scale is the biggest question -> they (rich fools) will try some dystopian shit to achieve it - I hope people will resist
3) Ways to get this signal: human feedback (viable economic activity), testing against an internal DB (via probabilistic models - I suspect the human brain works this way), simulation -> tough/expensive for real-world tasks but some improvements are there, see progress in robotics
4) Video/Youtube is next big frontier but currently computationally prohibitive
5) Next frontier possibly is this metaverse thing or what Nvidia tries with physics simulations
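Point 2's dependence on a signal can be made concrete with a minimal search loop: everything the search "knows" comes from the fitness function, so a bad signal means the same loop confidently optimizes the wrong thing. A toy sketch (all names are mine):

```python
import random

def hill_climb(fitness, start, neighbors, steps=1000, seed=0):
    """Minimal greedy search: the fitness signal is the only thing
    steering it -- the scarce ingredient the point above is about."""
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        candidate = rng.choice(neighbors(best))
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

# Toy task with a known optimum at x = 7.
fitness = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(fitness, start=0, neighbors=neighbors))  # 7
```

Swap `fitness` for a noisy or mis-specified proxy and the loop still "succeeds" at the wrong objective, which is why obtaining a trustworthy signal at scale is the hard part.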
I also wonder how the human brain is able to learn rigorous logic/proofs. I remember how hard it was to adapt to this kind of thinking, so I don't think it's the default mode. We need a way to simulate this in a computer to have any hope of progressing forward, and not via a trick like LLM + math solver but via some fundamental algorithmic advance.