Posted by ForHackernews 2 hours ago
I see students obsess every day over their SAT scores, which to some is a measure of individual intelligence. But what SAT score would a pair of students working together on a single test get? Or a dozen students working together? Would it be higher or lower? What sort of strategies would maximize their ability to collaborate? What would be the effect of giving/removing access to a calculator on a student's score? Access to scratch paper? Access to textbooks? Access to a dictionary? Access to unlimited time?
If we want to claim to understand intelligence, these are the sort of questions we should be able to answer. Can we?
At that point, we can only define it as something that causes this observation. And that is not very useful.
I didn't mind that there was a typo.
For example, it can easily do moderately advanced calculus, which is far better than the average human.
If I read thousands of books that explain the details of another civilization in another galaxy, very thoroughly and consistently, but it just happens to be all made up - did I gain knowledge? More importantly, does what I have in my brain now flip from being fiction to being knowledge if that civilization flipped from not existing to existing? How so, if nothing in my brain, or how I live out the rest of my life, changes in the least, if not a single atom in this galaxy changes (let's ignore that gravity has infinite reach and all that, for the sake of argument)?
If yes, how? What in your definition of knowledge makes that possible?
Edit: for "not in the training data" yes, humans generally can't know what they can't know.
Internet started it, hopefully LLMs will finish it.
Now with LLMs lowering the average through cognitive offloading and skill atrophy, prepare for it to get a whole lot worse.
Of course, the models are not intelligent. Their generated output reflects the statistical average. And in averaging more and more, you lose a lot of information.
The author argues that overreliance on AI will degrade the overall intelligence of human society, creating a negative feedback loop where future models train on increasingly degraded human data. I agree with this perspective to some extent. However, to definitively claim that human intelligence will only decline is overly simplistic. Rather, we might be about to witness a different facet—or the flip side—of what we have traditionally defined as intelligence.
Socrates once argued that the invention of writing would degrade the essence of human thought and memory. It is true that our capacity for raw memorization declined, but the act of recording enabled knowledge to be transmitted across generations. Couldn't LLMs represent a similar evolutionary trajectory?
It is undeniably true that LLMs atrophy certain cognitive muscles. However, I believe they catalyze development in other areas. In modern society, human discovery and knowledge are effectively monopolized by specific cliques. Without access to prestigious Western journals or incumbent tech giants, the barrier to entry is immense. The open-source community is no exception. For non-native English speakers, breaking into the open-source culture to access shared knowledge is notoriously difficult. But now, by spending a few dollars on an LLM, I can access the collective knowledge of that open-source ecosystem, translated seamlessly into my native language.
There is an old adage in the Korean Windows community: 'Linux is open, but it is not free.' And it’s true. To use Linux, you had to memorize arcane commands, and due to the lack of proper Korean documentation, the learning curve was vastly steeper than Windows. That very learning curve acted as a gatekeeping wall. LLMs explicitly dismantle that wall.
But this dismantling is a two-way street, and it exposes a fatal flaw in the author’s reliance on Shumailov’s 'Model Collapse' theory. The author claims AI compresses the tails of the data distribution, erasing minority viewpoints. What this ignores is that LLMs act as a conduit for cognitive diversity from the non-Western periphery. When a developer in South Korea or Brazil uses an LLM to translate their culturally embedded logic and problem-solving approaches into fluent English, they are injecting entirely new cognitive patterns into the global corpus. This does not compress the tails of the distribution; it actively thickens and extends them by capturing the 'social mind' of populations previously locked out of the internet's primary, English-dominated datasets.
Furthermore, LLMs function as a tool to re-evaluate things we've historically taken for granted—especially in areas that are too complexly intertwined, socio-politically loaded, or vast for the human mind to fully map. Take DeepMind's AlphaDev discovering a faster sorting algorithm as an example; it was a breakthrough achieved precisely because it reasoned from an alien, non-human perspective.
Human learning is fundamentally bottlenecked by environment and bias. Anyone who has interacted with academia knows it is riddled with pervasive prejudices and systemic inefficiencies. In South Korea, for instance, there is an entrenched bias that only researchers with US pedigrees are legitimate, and only papers in specific Western journals matter. This prejudice has prematurely killed countless promising research initiatives. It makes you wonder if the metrics we have long held up as 'superior' or 'correct' are actually deeply flawed. Modern society is too complex for the 'lone genius' model; paradigm shifts now require the intertwined research of multiple collectives. Yet, during this process, political interests often cause dominant groups to gatekeep and exclude others, completely regardless of scientific efficiency. In this context, an AI that lacks our inherent socio-political biases and optimizes purely based on probabilities can actually drive true breakthroughs.
Given all this, the absolute claim that AI unconditionally degrades human intelligence feels flawed. I seriously question whether the 'total sum' of human intelligence is actually experiencing a meaningful decline. Before making such claims, we desperately need to define what 'intelligence' actually means in this new context. The fatal flaw in current AI discourse is the complete lack of nuance—there is no middle ground. Everything is framed as a binary: either purely utopian or purely apocalyptic.
Speaking from personal experience, my cognitive muscle for writing raw code has atrophied because of AI. However, as a non-native English speaker, I used to struggle immensely with naming conventions. Now, my variable naming and overall architectural design capabilities have vastly improved. Conversely, I acutely feel my skills in manual memory layout management and granular code implementation degrading. The trade-off point will be wildly different for every individual.
Whenever I read doom-saying articles like the author's, I can't shake the feeling that they are simply projecting their own subjective anxieties and trying to pass them off as a universal conclusion.
I fully expect our future to involve PhD factories where doctorates label AI output for the most competitive rates possible.
The majority of us will have to contend with an information environment that is polluted and overrun.
I’ll argue that the internet pre-social-media was the “healthiest” in terms of our digital commons.
The reason being: when people are taught how to disagree effectively, all these counterfactual concepts that AI loses manifest naturally; they are logically necessary. But if people are not taught how to explore the landscape of ideas, they become "fascists for the common" and literally create the hellscape civilization we are all trapped within.
This in itself is negative, but the ramifications are profound: the landscape of ideas is never realized by a material percentage of the students. And those who could have contributed worthwhile insights have been taught to not contribute.
Once again, LLMs will have to be bound to a source of entropy or feedback of some sort as a limit. Sure, you might be able to throw terawatts of compute at, say, music production, but without examples of what people already like, or test audiences, you cannot answer the question of whether it is any good.