Posted by i5heu 2 hours ago

AI-Assisted Cognition Endangers Human Development (heidenstedt.org)
199 points | 124 comments
svnt 52 minutes ago|
It is a quirky article but the author, instead of engaging with information sources to understand what important thoughts people have had about these topics, feels the best thing to do is introduce new terms that other terms already exist for. This is basically just inductive bias plus the AI homogenization idea producing a distribution shift.

This is what happens in thought-isolation. It isn’t better than educating yourself, whether that education involves AI or not.

Philip Kitcher is known for epistemic monoculture; Dawkins and then Henrich popularized collective intelligence and cultural evolution.

The thing about these fear pieces is that concepts like the hollowed mind are reductive, and that reductionism is based on a reductive view of (usually other) people.

But what actually happens is we have formalized processes and can externalize them. This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.

drivebyhooting 10 minutes ago||
If computers are bicycles for the mind and AI are cars, I wonder what the analogue for the obesity epidemic is.
rcoveson 3 minutes ago|||
It's even more depressing than that framing would suggest, because we skipped over the decades where cars were just fast, powerful transportation tools and went straight from "mind bicycles" to "mind Teslas" full of cameras, tracking, proprietary software, and subscription fees.
gdubya 6 minutes ago||||
That is a sharp and slightly chilling analogy. If Steve Jobs saw the computer as a tool that amplified human effort (the bicycle), and AI represents a tool that automates that effort entirely (the car), then the "obesity epidemic" of the mind is likely Cognitive Atrophy.

- Gemini

antonvs 3 minutes ago||
> That is a sharp

LLM tell right there.

> - Gemini

Yes, we already know. I suppose you think posting AI slop in this context is funny. It isn't.

Also, no, the observation is not sharp. You're being gaslighted and having your cock fluffed by a machine.

antonvs 4 minutes ago|||
The obesity epidemic has much less to do with cars and much more to do with cheapness of food and volume consumed.

A typical deli sandwich in the US should be enough to last any normal person three days. Same goes for e.g. ice cream from Shake Shack (random example I know, but one I came across recently). If you buy one of these and eat them in one sitting, the answer to "why am I obese" is simply "you eat way too much."

superxpro12 13 minutes ago|||
I think we're excluding from this analysis the question of whether these "AI" products will remain truly unbiased and free from external (corporate) influences.

When AI gains true marketshare in the "think-space", I have zero trust that the corporate overlords controlling these machines will use them in the fairest interests of humanity.

rpcope1 6 minutes ago||
You're absolutely right! But Brawndo has what plants crave!
antonvs 8 minutes ago|||
When I read pieces like this all I think is, resistance to change is a helluva drug.

I've been working on a project and using LLMs heavily to inform my design decisions. There's already a long list of cases where it has taught me things I wasn't familiar with, alerted me to possibilities I didn't consider, shown me how to do things that I was struggling with. In those cases I ask for references, and it delivers.

This is not "endangering human development". If anything, it's the exact opposite - allowing human knowledge to be transmitted to other humans in an accessible way that otherwise, usually simply would not have happened.

Of course, this all depends on using AI to enhance cognition and access to knowledge, as opposed to just letting a machine write all your code for you without review, Yegge-style.

I'm not saying there isn't a moral dimension to all this, and areas of serious concern. But the one about "endangering human development" is wholly in our individual hands. You can use AI to help you learn, or to replace the need to learn. The former will be better for human development.

One real lesson from this is perhaps that we need to teach people how to use AI in ways that benefit their development, not just their output.

Forgeties79 23 minutes ago||
>This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.

I think for a lot of us the problem is that this is not a given. It’s often promised and rarely occurs, especially in the modern era. Increased productivity usually just means increased demands in the workplace.

zozbot234 1 hour ago||
At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
palmotea 1 hour ago||
Oh no, not that tired thing again. I suppose your point is: people once were critical of the technology of writing, so all criticism of the technology-at-hand is illegitimate. You don't actually make a point, so one has to assume.

Some points:

1. Technological inventions are not repetitions of the same phenomenon. Each invention is its own unique event, you cannot generalize the experience with previous inventions to understand the effects of the latest ones.

2. Socrates may have been in large degree right. Imagine that you and your society have been locked in the sewers, condemned to wade in shit for so long that you and your ancestors long ago forgot what fresh air feels like. What would you think about your life? Would you think "this is horrible" or "this is fine"? Or maybe "I enjoy the smell of shit and we're so much better off because we don't have to worry about sunburn"?

_verandaguy 1 hour ago|||
While I agree with your rebuke of the GP, Socrates was materially wrong about writing (or at least, about the ability to persist information beyond any single human lifetime).

Cumulatively, knowledge work (including, in particular, curating knowledge) is exceptionally energy intensive from an evolutionary standpoint. It does pay dividends, clearly, but to get compounding effects from it, being able to efficiently pass down big corpora of facts, ideas, processes, etc., is an absolute necessity.

Writing systems are the fundamental way through which we can do this. They worked for us for millennia, and we eventually built upon them to develop encodings used today to store information remarkably densely.

bluGill 1 hour ago|||
The larger win from writing is passing down things that are not commonly needed. If you hunt antelope every year, I can teach my kids. If we know there are antelope "over there", but they are easy to over-hunt and so we only hunt them in 100-year droughts, nobody in the village will know how to hunt them when we need to, and so we need writing. (Never mind how we figured out that they are easy to over-hunt.)
bonesss 34 minutes ago|||
> Writing systems are the fundamental way through which we can do this

Writing systems are ‘a’ fundamental way to pass down large collections of facts, and my personal bias. We are prejudiced and naive though:

- Those knotting systems in China and South America that preceded writing for millennia are also persistent and intricate

- Cave paintings are quite dense, drawings and art are direct visual representations with compound meanings (seasonal behaviour, hunting strategy, creation myths)

- Iconography of all forms preserves a rich visual language; hieroglyphics and equivalents carry deep social instruction with verbal reinforcement

- Stories with self-correction have many-tens-of-millennia consistency, categorically outstripping any other medium we have tested; the aboriginal dream-stories capture humanity's shared storage during its global expansion

- Music is math. Song and dance captured all of the above in self-verifying and correcting fashion for hundreds, hundreds of millennia before that.

And before we hit any complexity arguments, like a hard specification:

a) those formats leveraged human pattern recognition and meat-based compression (ie “every chunk in the 4,000 page OOXML specification is as simple as do-as-Word-did…”)

b) find a video of African dance/drumming ceremonies — density is not the issue — a special hoot, a known drumbeat… there were continental signalling networks that terrified Colonial explorers.

There is an argument that writing allows for corrosive decontextualization. Jesus cursed a fig tree. No one learning that tale the old ways would snicker. And, thus, history becomes not a tale, but a grab bag of a child’s letter blocks, you can spell anything you want.

programjames 20 minutes ago||
[dead]
DiscourseFan 26 minutes ago||||
As to 2., the whole of this narrative in the Phaedrus is ironic, considering it depends on the written word for its transmission: the dialogue is fully reported by Plato, filled with literary allusion and dramatic setting. Cf. "Plato's Pharmacy," by Derrida, and the work of his student, Bernard Stiegler.
gallerdude 1 hour ago||||
1. You can't understand the nuances, but there is a general pattern: new inventions may make us slightly less proficient at specifics, yet more powerful overall.

2. Imagine a hunter-gatherer time-travelled to 2026. You go to a cafe with him for lunch, and he learns that food is cheap, delicious, and abundant. He sees your house, and thinks it's amazing compared to his cave. He thinks that 2026 must be absolute paradise. You explain to him: well, kinda, but also not really. Is the hunter-gatherer right?

AlecSchueler 1 hour ago|||
Alternatively he sees that you live in your house alone and feel lonely all the time. Maybe you have a small family and a few friends but it's nothing compared to the tribal life he knows.

He sees you spend your day working but rarely get to go outside or do anything active. Even when you're not working you sit behind a desk staring at a screen.

He wonders why you bother with all the technology when it made your life worse. Is he right?

tadfisher 45 minutes ago|||
The hunter-gatherer will wonder why you spend so much time working. He only spends 2-3 hours a day gathering and preparing food, maybe an hour maintaining tools and shelter; with the rest dedicated to leisure and social activities.
quirkot 1 hour ago||||
Regarding #2: how many serfs came home after re-digging the toilet hole to eat a meal of hand-milled grain bread and old vegetables with the members of the family that survived infancy, and thought "life just doesn't get any better than this"? Probably almost all of them.
partyficial 1 hour ago||||
He (zozbot234) could also be agreeing with OP, not disagreeing.

I don't remember phone numbers anymore. If I were to lose my phone, or the cloud, I'm SOL re-adding everyone.

pixl97 1 hour ago|||
I mean, it's most likely because you have an absolute shit load of numbers/contacts in your phone. In the old days people just had rolodexes filled with numbers and if that disappeared they were just as screwed.

I remember a few numbers of my most direct contacts and depend on backups for everything else.

rrr_oh_man 1 hour ago|||
> He (zozbot234) could also be agreeing with OP, not disagreeing.

This is how I for one understood this.

jareklupinski 58 minutes ago||||
> What would you think about your life? Would you think "this is horrible" or "this is fine"? Or maybe "I enjoy the smell of shit and we're so much better off because we don't have to worry about sunburn"?

I'd probably start with "who locked us in this sewer?"

hibikir 1 hour ago||||
That's quite the uncharitable view. Let's try a better one.

Changes in what humans need to remember have, for as far back as we have written records, changed the skills humans hone over time. They change our fitness function. Some of those changes are bad for a while, and then get better. Others are just far better at all times. Others might get rejected. Either way, it takes a long time before we know what a technology does to us: see how cheap printing is directly linked to the wars of religion.

So it's not that AI could not be bad in the short run, or even in the long run: it appears to be the kind of technology that cannot be evaluated without significant adoption, and at that point we are on this rollercoaster for a while whether we want it or not. See social media, or just political innovation, like liberal democracy or communism. We can make guesses, but many guesses made early on look ridiculous in hindsight, like someone complaining about humans relying on writing.

tbrownaw 1 hour ago|||
Writings are fixed once written, and don't update themselves as the world changes.

Writings are subject to known biases such as publication bias, and so relying on them reduces the range of what you can consider.

Therefore, writing is bad for the same reasons that this post thinks that AI is bad.

quirkot 1 hour ago||
[dead]
alwa 1 hour ago|||
(From Socrates’ dialog with Phaedrus)

https://classics.mit.edu/Plato/phaedrus.html#:~:text=there%2...

xg15 43 minutes ago||
> Phaedrus: Yes, Socrates, you can easily invent tales of Egypt, or of any other country.

Looks like even back then, they went "cool story bro" on that text...

hdndjsbbs 1 hour ago|||
The irony of quoting this particular story without providing any of the necessary context for readers. Truly an aid to reminiscence and not memory.
charonn0 42 minutes ago|||
> they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

This could be describing an internet argument where both parties google for expert articles that seem to support their point of view without really understanding anything about the subject.

butlike 1 hour ago|||
It's just a story. Doesn't mean it's wise.
eaglelamp 1 hour ago|||
You're misinterpreting the quote. Socrates is saying that being able to find a written quotation will replace fully understanding a concept. It's the difference between being able to quote the Pythagorean theorem and understanding it well enough to prove it. That's why Socrates says that those who rely on reading will be "hard to get along with" - they will be pedantic without being able to discuss concepts freely.

Likewise with AI the appearance of reasoning without the substance could lead to boring exchanges of plausible slop rather than meaningful discourse.

pixl97 1 hour ago||
I mean, Socrates said plenty of things that were wrong or lacked any scientific grounding, too.

Simply put, at humanity-wide scales, written information is by far the most important thing you can have. There is a kind of Sorites paradox occurring, where individual knowledge that can be held by one person conflicts with systems knowledge that has to be redundant and easily transferable.

user3939382 1 hour ago|||
This is actually a great criticism. Post-Enlightenment, we’ve come to worship the written word as a source of truth. It’s not. Thoughts, wisdom, and understanding exist primarily (and by necessity primarily) as a continuous structure in our minds. By writing, we distill and collapse this rich continuous structure into a discrete 2D slice. It’s portable, which has many benefits, but we tend to forget that this written word we worship in academia is a low-fidelity copy created out of necessity, not because it’s optimal. In fact, much is lost this way. The hazard is that we often end up testing for mastery of this low-fidelity discretization rather than the knowledge structure it shadows.
DangitBobby 20 minutes ago|||
We would literally not have access to this criticism without the written word. It would long ago have been lost to time. And so it is with innumerable other thoughts that happily have been recorded.

Before the written word, the uneducated had to just take the words of the (apparently) wise as an authority on all matters, and the only access to their knowledge was through conversation with them. That's gatekeeping and siloing in one go.

And authorities' thoughts themselves often form 2D slices of knowledge once they stop continually keeping themselves up to date on the state of the art. Even if they do keep themselves updated, each conversation you've had with them (or what a layperson can recollect of it) is a thin 2D slice of that knowledge.

I can think of practically no ways that written expertise is not better.

layer8 1 hour ago|||
On the other hand, books allow us to access a much broader selection of ideas than would otherwise be feasible.

I’m not sure where LLMs lie on that spectrum. They allow faster access, but it also feels more limited.

CamperBob2 1 hour ago|||
[flagged]
moralestapia 55 minutes ago|||
This is why I come to HN, knowledgeable people enrich the discussion so much with their unique points of view.

Also thanks to Mia (she/her), this was a very interesting read.

reg_dunlop 1 hour ago||
Impressive. Thanks for the share.

I was thinking about this recently: The difference between systemic (systematic) learning and opportunistic learning.

AI enables opportunistic learning, or just-in-time (JIT) learning. It gives the impression of infinite knowledge.

Most general concepts are well within the grasp of human understanding.

My curiosity re: the difference between systemic vs. opportunistic learning was about the effect of longer-term exposure to a tool that enables opportunistic learning.

giancarlostoro 7 minutes ago||
I think the best way I can put it is probably this: it's the same as if you just cheat off someone else in school; you aren't learning much, are you? AI is the same thing. Don't just cheat, use it to learn instead.
jbethune 1 hour ago||
This was a bit word-salad-y but I share the same basic concern. What I worry about more is the tendency toward greater and greater cognitive off-loading to LLMs. My sister told me a story the other day about how she caught her plumber using ChatGPT on his phone to fix an issue with her bathroom. I just think it's good for humans to know how to do stuff.
hn_acc1 1 hour ago||
Sure, but.. I've been coding for 40 years and I don't know everything. To me, a LOT depends on what the plumber asked chatgpt about. For example: building codes in that city, to figure out what his options are - like, is he allowed to just put in any old toilet, or is there a gpf restriction? What's the replacement part number for faucet XYZ's gasket? Those seem reasonable.

"how do I fix a clogged toilet?" would be bad..

SirMaster 59 minutes ago|||
>like, is he allowed to just put in any old toilet, or is there a gpf restriction?

And if the LLM gets that wrong? It's his job to know the codes or how to go to a reliable resource to find out the correct codes.

Calazon 41 minutes ago|||
Hopefully he would be using the LLM as an enhanced search engine that can point him to relevant authoritative sources that he can use to fact-check its output. I have done that in the past to some effect.
bethekidyouwant 44 minutes ago|||
Maybe he just needs a reminder and he’ll have an "oh yeah" moment when he reads the output; maybe he’ll ask it for primary sources. There’s a lot of bad faith going around.
sidrag22 56 minutes ago||||
I cling a bit to a prompt I sent a while ago about just tossing a chopped pepper into a recipe for baked ziti. I had a recipe that I followed fairly tightly, with slight changes to see how they would work out each time. Instead of prompting "when should I add chopped bell pepper?", the small change to "what are my options for when to add chopped bell pepper?" opened up a variety of different methods I could try when returning to that recipe, and let me decide what I like best based on the outcome.

The first prompt style is, I think, a way society drifts incidentally toward a less interesting one, with less variety in solutions. The second one, I think, allows people to still exercise their potential to try a variety of things and keep that variety.

alpinisme 1 hour ago||||
Presumably in his jurisdiction he should know what official resources to consult. But the point about it depending on his question is definitely fair.
theappsecguy 1 hour ago|||
[dead]
dfee 1 hour ago|||
your sister offloaded to her plumber.

her plumber offloaded to chatgpt.

"i just think it's good for humans to know how to do stuff."

are we talking about your sister or her plumber?

jessetemp 1 hour ago|||
The plumber, obviously. Not everyone needs to know how to be a plumber, but a plumber should know how to be a plumber.
danielbln 1 hour ago|||
I'm a software engineer and know how to be a software engineer, yet I find LLMs quite helpful. Why should a plumber be any different?
daveguy 1 hour ago||
Because if a plumber moves fast and breaks things, I've got shit all over the place.
enraged_camel 58 minutes ago||
That, and also the plumber loses their license. So perhaps the solution is professional licensing for software engineers.
bigfishrunning 15 minutes ago|||
I feel like a licensing process for software engineers would:

A) test lots of skills that are common but not universal. I'm thinking JavaScript trivia here, where I don't write any JavaScript in my professional capacity as a software engineer, but there are many people who think Software Engineer == JavaScript Programmer.

B) shine too much of a light on the fact that this industry is full of people who demand high salaries but can't program their way out of a paper bag.

davidkhess 16 minutes ago||||
I think that's coming regardless. AI just might be the perfect storm to bring the timeline in considerably.
c-hendricks 35 minutes ago|||
Engineer is a protected title in Canada after all
pixl97 1 hour ago||||
Which part of being a plumber? Was the house installed with something non-typical? Would you rather have them take an additional 30 minutes looking up their technical manual?

Without further knowledge of what was going on it's hard to say why they used ChatGPT.

b2ccb2 57 minutes ago||
> Would you rather have them take an additional 30 minutes looking up their technical manual?

Yes

NiloCK 46 minutes ago||
You know that plumbers charge by the hour, right?
neetle 1 minute ago||
How do you know ChatGPT is referencing the right information if you need to look it up in a manual?
dfee 1 hour ago|||
the question was rhetorical. but, since you responded – do you think that there are limits to who can or should use ai? if the plumber's use of ChatGPT improved outcomes, isn't that preferable?

some knowledge is likely "cached" in the plumber. maybe he doesn't ask the same question twice. i'm sympathetic to the plumber, but i think your concerns of erosion of knowledge or skill are worth pushing on further.

thwarted 1 hour ago||||
The issue here is that the sister could have used ChatGPT herself, so why bother hiring the plumber. The plumber has provided less value than was expected. But make no mistake: the value the sister was looking for was to have someone else deal with it, and there's a price that the sister was willing to pay for the service of having someone else deal with it.

In the comments of this HN post, there is a dead comment from someone who posted an LLM's summary of another comment. It's dead because it offers very little/no value: that summary could be obtained directly from ChatGPT by anyone who wants a summary.

The sister offloaded plumbing to the plumber under the economic principle of comparative advantage. The plumber undermines the value they provide by outsourcing yet again. What value is provided by the middle man who does nothing but proxy the issue? Is the person who does this really a plumber? Is a plumber merely someone who has plumbing tools like wrenches and pipe tape?

That the plumber also wanted to outsource it is the concern: right now, the plumber is able to make money because of the difference between what is charged to deal with a problem and what it costs them to deal with it. Knowledge and experience have become a commodity, which we probably can't do anything about, but along with that come all the drawbacks (and advantages) of things, and humans, being commoditized.

cortesoft 57 minutes ago||
This is assuming that ChatGPT had everything needed to do the work. If the plumber was asking specific questions, based on their previous experience and knowledge about what needed to be done, the sister might not have been able to get the same result from her use of ChatGPT that the plumber received.

Experts look things up all the time, because no one can hold all the knowledge of a field in their head. Being an expert means being able to know what to look up and how to use the information retrieved from looking something up.

In the plumber example, ChatGPT is going to tell them to do things using the terminology that plumbers know, and tell them to do tasks that plumbers know how to do. The sister would have to continually look up more and more things about how to do basic plumbing tasks, rather than just looking up particular novelties.

thwarted 44 minutes ago||
Yes, this is why I mentioned comparative advantage.
askonomm 1 hour ago|||
So you are saying that a plumber does not in fact need to know how to be a plumber?
Lerc 39 minutes ago|||
Would you prefer to have:

The plumber who turned up leave without fixing the problem,

The plumber fix something he didn't know how to do by looking up the answer, or

The plumber attempt to fix something he didn't know how to do without looking it up?

While it's great to have the plumber who knows how to do everything, they are rare and in high demand, so they cost way more than you can afford.

bigfishrunning 18 minutes ago||
I would prefer a plumber with some kind of reference that doesn't just make shit up 10% of the time -- plumbing mistakes are insanely costly (I once owned a house that was destroyed by a plumbing mistake made by a previous owner).
ThalesX 1 hour ago|||
I've always wanted to be more of a handyman but never knew where to start. I used LLMs to put together a toolkit and then used them to fix various stuff around the house. I'm at the point where I'm comfortable with beginner projects and moving on to intermediate ones, and I feel like the quality of my work beats that of hired help at my level of competence. So... I'm glad I could off-load some cognition to LLMs and get to the actually useful parts.
NiloCK 45 minutes ago|||
Did she also catch him with a wrench?
comboy 1 hour ago||
I mean, yes, but LLMs have been making me more cognitively active. I've learned how to do more stuff than I would have without them, and it's a decent multiplier, not some rounding error.

Obviously you can have a plumber who knows his stuff and one who doesn't. The good one can check some details and will recognize BS. If you already have the bad one, it's probably better if he uses an LLM than if he doesn't.

dcre 47 minutes ago||
I've never seen an argument like this that, if true, wouldn't also apply to the cognitive offloading we do by relying on culture, by working with others, or working with the artifacts built by others.
0xBA5ED 30 minutes ago|
Cognitive offloading via culture has many forms and many of them are not sustainable at all.
bomewish 2 hours ago||
Doh. I went in expecting a really cool thesis — because the idea seems somehow intuitive, or at least really intriguing. But I have no clue what I read. Just totally odd and unconvincing. Greenland? Dialectal substrate? The idea is still super intriguing to me though!
chromacity 1 hour ago||
Well, at least you know it's not AI-written because it's delightfully weird and evidently about some pet theory of the author. This day and age, that's something to unironically celebrate.
ulf-77723 1 hour ago||
I love this! Especially the part about Greenland. For quite some time, dashes were a good indicator that a text was written by AI, but now the best option is to write in a more human way by being a little less polished but weird. At least the message gets across.
asdfman123 1 hour ago||
While I understand what the paper is saying, I'm not sure whether what I read was written by someone who is smarter than me and naturally goes higher up the abstraction tree, or by someone who just wants to write really smart-sounding things.

Either way, I think there's a much simpler way to express what she's trying to say: offloading thinking to AI is bad because it's less flexible and doesn't easily update its reasoning with new information.

layer8 56 minutes ago||
It’s a blog post, not a paper.
gobdovan 57 minutes ago||
By the logic that today's news is fundamental to know, there really is no point in reading books more than six months old. If Einstein woke up from a coma, he'd be useless, as he doesn't even know who won the World Cup. For real now: if an AI can help you solve a problem using 2,000 years of human logic, does it really matter if it's "skewed" away from a political shift that happened three weeks ago?

I also don't believe that everybody I know is idiosyncratic in the way they view the world. And even if they were, I'd probably just pay attention to the things that are directly relevant to me. So probably I'll misunderstand most of what they say anyway.

Manuel_D 49 minutes ago||
> In early 2026, the USA prepared to invade Greenland and, therefore, the EU. Only a few months prior to that it was completely unthinkable that the USA would even think about threatening an invasion of Greenland. As AI base models are stuck in the past, they do not easily accept these events as real and often label them as “hypothetical”, “fake news”, or “impossible”. This also affects new models like Gemini 3 Pro, GLM-5 or GPT-5.3-codex.

Isn't this just inherent to any system that takes some time to update? E.g. if a country moves its capital to a different city, then textbooks, maps, etc. are going to contain incorrect information for a while until updated editions are published.

A lot of the complaints about AI are really about the drawbacks of information systems more generally, and the failure modes pointed out are rarely novel. The "Cognitive Inbreeding" effect attributed to AI would also have occurred with Google search would it not? Lots of people type the same question into google and read the top results, instead of searching a more diverse set of information sources. It's interesting that the author mentions web search as a way to ameliorate this, when it seems to me that web search is just as capable of causing cognitive inbreeding.

darepublic 30 minutes ago||
I agree that Google served this role even before LLMs. But these days people delegate their reasoning and brainstorming to the computer, not just lookup. And beyond our generation are those who will have grown up doing this. Therefore I think the concern is justified.
djrorkrmrk 41 minutes ago||
A few years ago, someone blew up a pipeline in the EU. Before that, some people lied about medical stuff.

AI is just the current scapegoat.

thepasch 1 hour ago||
AI-assisted, I can see. I believe it doesn’t have to be that way, though. If you use AI as a grounding tool - essentially something that can take your stream of consciousness and parse it into a series of concrete and pointed search terms for real-time research, instead of falling back on what’s in the weights - then it’s honestly hard to think of a technology in the history of the species with more potential to be useful: it gives you much more direct access to both your unknown unknowns and your unknown knowns.

That is, of course, provided that you pay attention to whether it actually does research. In their current state, LLMs are practically useless for this purpose for the vast majority of users, as no one knows how they work, what to watch out for, what the failure modes look like, or how to keep nonsense apart from facts when both are presented with an equal amount of conviction. That’s not a user problem, it’s an education problem.

drusepth 34 minutes ago|
This is absolutely something to potentially be worried about, but one thing I never see highlighted in critiques of AI-assisted cognition is that some elements of physiology may not actually be biologically necessary if they can be fully supplanted by some replacement (in this case, new tools). I can't traverse as much land on foot as my ancestors did (my muscles are weaker, my endurance is less, etc), but I can travel even further than they could by car/plane/etc.

Nothing about the nature of evolution implies our current cognitive processing is ideal/sacred and shouldn't ever change.

lexandstuff 23 minutes ago|
> I can't traverse as much land on foot as my ancestors did, but I can travel further by car/plane/etc

Which is partially how we found ourselves in the midst of an obesity epidemic.
