This is what happens in thought-isolation. It isn’t better than educating yourself, whether that education involves AI or not.
Philip Kitcher is known for his work on epistemic monoculture; Dawkins and later Henrich popularized collective intelligence and cultural evolution.
The thing about these fear pieces is that concepts like the hollowed mind are reductive, and that reductionism is based on a reductive view of (usually other) people.
But what actually happens is we have formalized processes and can externalize them. This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.
- Gemini
LLM tell right there.
> - Gemini
Yes, we already know. I suppose you think posting AI slop in this context is funny. It isn't.
Also, no, the observation is not sharp. You're being gaslighted and having your cock fluffed by a machine.
A typical deli sandwich in the US should be enough to last any normal person three days. Same goes for, e.g., ice cream from Shake Shack (a random example, I know, but one I came across recently). If you buy one of these and eat it in one sitting, the answer to "why am I obese" is simply "you eat way too much."
When AI gains real market share in the "think-space", I have zero trust that the corporate overlords controlling these machines will use them in the best interests of humanity.
I've been working on a project and using LLMs heavily to inform my design decisions. There's already a long list of cases where it has taught me things I wasn't familiar with, alerted me to possibilities I didn't consider, shown me how to do things that I was struggling with. In those cases I ask for references, and it delivers.
This is not "endangering human development". If anything, it's the exact opposite - allowing human knowledge to be transmitted to other humans in an accessible way that otherwise, usually simply would not have happened.
Of course, this all depends on using AI to enhance cognition and access to knowledge, as opposed to just letting a machine write all your code for you without review, Yegge-style.
I'm not saying there isn't a moral dimension to all this, and areas of serious concern. But the one about "endangering human development" is wholly in our individual hands. You can use AI to help you learn, or to replace the need to learn. The former will be better for human development.
One real lesson from this is perhaps that we need to teach people how to use AI in ways that benefit their development, not just their output.
I think for a lot of us the problem is that this is not a given. It’s often promised and rarely occurs, especially in the modern era. Increased productivity usually just means increased demands in the workplace.
Some points:
1. Technological inventions are not repetitions of the same phenomenon. Each invention is its own unique event; you cannot generalize from the experience of previous inventions to understand the effects of the latest ones.
2. Socrates may have been in large degree right. Imagine that you and your society have been locked in the sewers, condemned to wade in shit for so long that you and your ancestors long ago forgot what fresh air feels like. What would you think about your life? Would you think "this is horrible" or "this is fine"? Or maybe "I enjoy the smell of shit, and we're so much better off because we don't have to worry about sunburn"?
Cumulatively, knowledge work (including, in particular, curating knowledge) is exceptionally energy intensive from an evolutionary standpoint. It does pay dividends, clearly, but to get compounding effects from it, being able to efficiently pass down big corpora of facts, ideas, processes, etc., is an absolute necessity.
Writing systems are the fundamental way through which we can do this. They worked for us for millennia, and we eventually built upon them to develop encodings used today to store information remarkably densely.
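As a toy illustration of that density (a Python sketch; zlib stands in for "encodings used today", and the corpus is invented):

    import zlib

    # A repetitive stand-in for a passed-down corpus of facts and processes.
    corpus = ("Plant after the first rain; rotate the fields each season. " * 200).encode("utf-8")

    compressed = zlib.compress(corpus, 9)
    print(f"raw: {len(corpus)} bytes, compressed: {len(compressed)} bytes")
    # The corpus's redundancy compresses away; the stored form is far denser.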
Writing systems are ‘a’ fundamental way to pass down large collections of facts, and a personal bias of mine. We are prejudiced and naive, though:
- The knotting systems in China and South America that preceded writing by millennia are also persistent and intricate
- Cave paintings are quite dense; drawings and art are direct visual representations with compound meanings (seasonal behaviour, hunting strategy, creation myths)
- Iconography of all forms preserves a rich visual language; hieroglyphics and their equivalents carry deep social instruction with verbal reinforcement
- Stories with self-correction show many-tens-of-millennia consistency, categorically outstripping any other medium we have tested; the Aboriginal dream-stories captured humanity's shared storage during its global expansion
- Music is math. Song and dance captured all of the above in self-verifying and self-correcting fashion for hundreds upon hundreds of millennia before that.
And before we hit any complexity arguments, like a hard specification:
a) those formats leveraged human pattern recognition and meat-based compression (i.e., “every chunk in the 4,000-page OOXML specification is as simple as do-as-Word-did…”)
b) find video of African dance/drumming ceremonies — density is not the issue — a special hoot, a known drumbeat… there were continental signalling networks that terrified Colonial explorers.
There is an argument that writing allows for corrosive decontextualization. Jesus cursed a fig tree. No one learning that tale the old ways would snicker. And, thus, history becomes not a tale, but a grab bag of a child’s letter blocks, you can spell anything you want.
2. Imagine a hunter-gatherer is time-travelled to 2026. You go to a cafe with him for lunch, and he learns that food is cheap, delicious, and abundant. He sees your house and thinks it's amazing compared to his cave. He thinks that 2026 must be absolute paradise. You explain to him: well, kinda, but also not really. Is the hunter-gatherer right?
He sees you spend your day working but rarely get to go outside or do anything active. Even when you're not working you sit behind a desk staring at a screen.
He wonders why you bother with all the technology when it made your life worse. Is he right?
I don't remember phone numbers anymore. If I were to lose my phone, or the cloud, I'm SOL re-adding everyone.
I remember a few numbers of my most direct contacts and depend on backups for everything else.
This is how I, for one, understood this.
I'd probably start with "who locked us in this sewer?"
Changes in what humans need to remember and do have, for as far back as we have written records, changed the skills humans hone over time. They change our fitness function. Some of those changes are bad for a while, and then get better. Others are just far better at all times. Others might get rejected. Either way, it takes a long time before we know what the technology does to us: see how cheap printing is directly linked to the wars of religion.
So it's not that AI could not be bad in the short run, or even in the long run: it appears to be the kind of technology that one cannot evaluate without significant adoption, and at that point, we are on this rollercoaster for a while whether we want it or not. See social media, or just political innovation, like liberal democracy or communism. We can make guesses, but many guesses made early on look ridiculous in hindsight, like someone complaining about humans relying on writing.
Writings are subject to known biases such as publication bias, and so relying on them reduces the range of what you can consider.
Therefore, writing is bad for the same reasons that this post thinks that AI is bad.
https://classics.mit.edu/Plato/phaedrus.html#:~:text=there%2...
Looks like even back then, they went "cool story bro" on that text...
This could be describing an internet argument where both parties google for expert articles that seem to support their point of view without really understanding anything about the subject.
Likewise with AI the appearance of reasoning without the substance could lead to boring exchanges of plausible slop rather than meaningful discourse.
Simply put, at humanity-wide scales, written information is by far the most important thing you can have. There is a kind of Sorites paradox occurring, where individual knowledge that can be held by one person conflicts with systems knowledge that has to be redundant and easily transferable.
Before written word, the uneducated had to just take the words of the (apparently) wise as an authority on all matters, and the only access to their knowledge was through conversation with them. That's gatekeeping and siloing in one go.
And authorities' thoughts themselves often form 2D slices of knowledge once they stop continually keeping themselves in the know on the SotA. Even if they do keep themselves updated, each conversation you've had with them (or what a layperson can recollect of it) is a thin 2D slice of that knowledge.
I can think of practically no ways that written expertise is not better.
I’m not sure where LLMs lie on that spectrum. They allow faster access, but it also feels more limited.
Also thanks to Mia (she/her), this was a very interesting read.
I was thinking about this recently: The difference between systemic (systematic) learning and opportunistic learning.
AI enables opportunistic learning, or just-in-time (JIT) learning. It gives the impression of infinite knowledge.
Most general concepts are well within the grasp of human understanding.
My curiosity re: the difference between systemic vs. opportunistic learning was about the effect of longer-term exposure to, and use of, a tool that enables opportunistic learning.
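To stretch the programming analogy (a toy Python sketch; study, curriculum, and the labels are mine, nothing rigorous):

    from functools import lru_cache

    def study(topic: str) -> str:
        # Stand-in for the costly act of actually learning something.
        return f"understanding of {topic}"

    curriculum = ["pipes", "traps", "venting", "local code"]

    # Systematic learning: work through everything up front, needed or not.
    systematic = {t: study(t) for t in curriculum}

    # Opportunistic / JIT learning: study only when a question arises,
    # keeping whatever was learned.
    @lru_cache(maxsize=None)
    def jit_learn(topic: str) -> str:
        return study(topic)

    print(jit_learn("traps"))  # learned at the moment of need

The cache is the optimistic assumption; the worry is that, unlike lru_cache, humans evict.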
"how do I fix a clogged toilet?" would be bad..
And if the LLM gets that wrong? It's his job to know the codes or how to go to a reliable resource to find out the correct codes.
The first prompt style is, I think, a way society drifts incidentally towards a less interesting one, with less variety in solutions. The second one, I think, allows people to still exercise their potential to try a variety of things and keep that variety.
her plumber offloaded to chatgpt.
"i just think it's good for humans to know how to do stuff."
are we talking about your sister or her plumber?
A) test lots of skills that are common but not universal. I'm thinking javascript trivia here, where I don't write any javascript in my professional capacity as a software engineer; but there are many people who think Software Engineer == Javascript Programmer
B) shine too much of a light on the fact that this industry is full of people who demand high salaries but can't program their way out of a paper bag
Without further knowledge of what was going on it's hard to say why they used ChatGPT.
Yes
some knowledge is likely "cached" in the plumber. maybe he doesn't ask the same question twice. i'm sympathetic to the plumber, but i think your concerns of erosion of knowledge or skill are worth pushing on further.
In the comments of this HN post, there is a dead comment from someone who posted an LLM's summary of another comment. It's dead because it offers very little/no value: that summary could be obtained directly from ChatGPT by anyone who wants a summary.
The sister offloaded plumbing to the plumber under the economic principle of comparative advantage. The plumber undermines the value they provide by outsourcing yet again. What value is provided by the middle man who does nothing but proxy the issue? Is the person who does this really a plumber? Is a plumber merely someone who has plumbing tools like wrenches and pipe tape?
That the plumber also wanted to outsource it is the concern: right now, the plumber is able to make money because of the difference between what is charged to deal with a problem and what it costs them to deal with it. Knowledge and experience have become a commodity, which we probably can't do anything about, but along with that come all the drawbacks (and advantages) of things, and humans, being commoditized.
Experts look things up all the time, because no one can hold all the knowledge of a field in their head. Being an expert means being able to know what to look up and how to use the information retrieved from looking something up.
In the plumber example, ChatGPT is going to tell them to do things using the terminology that plumbers know, and tell them to do tasks that plumbers know how to do. The sister would have to continually look up more and more things about how to do basic plumbing tasks, rather than just looking up particular novelties.
The plumber who turned up leaving without fixing the problem,
The plumber fixing something that he didn't know how to do by looking up the answer.
The plumber attempting to fix something that they didn't know how to do.
While it's great to have the plumber who knows how to do everything, they are rare and in high demand, so cost way more than you can afford.
Obviously you can have a plumber that knows his stuff and one that doesn't. The good one can check some details and will recognize BS. If you already have the bad one, it's probably better if he uses an LLM than if he doesn't.
Either way though I think there's a much simpler way to express what she's trying to say. Offloading thinking to AI is bad because it's less flexible and doesn't easily update its reasoning with new information.
I also don't believe that everybody I know is idiosyncratic in the way they view the world. And even if they were, I'd probably just pay attention to the things that are directly relevant to me. So probably I'll misunderstand most of what they say anyway.
Isn't this just inherent to any system that takes some time to update? E.g. if a country moves its capital to a different city, then textbooks, maps, etc. are going to contain incorrect information for a while until updated editions are published.
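A toy sketch of that update lag (Python; the class name and the daily refresh interval are invented for illustration):

    import time

    class Edition:
        """Serves whatever was true as of the last print run."""

        def __init__(self, fact: str, refresh_every: float = 86400.0):
            self.fact = fact
            self.refresh_every = refresh_every  # seconds between print runs
            self.last_updated = time.monotonic()

        def read(self) -> str:
            # Readers get the stored answer, even if the world has moved on.
            return self.fact

        def maybe_refresh(self, new_fact: str) -> None:
            # The new fact is only picked up once the refresh interval elapses.
            if time.monotonic() - self.last_updated >= self.refresh_every:
                self.fact = new_fact
                self.last_updated = time.monotonic()

    atlas = Edition("capital: Old City")
    atlas.maybe_refresh("capital: New City")  # too soon: no effect yet
    print(atlas.read())                       # still "capital: Old City"

Any information system with a non-zero refresh interval has this stale window; model training cutoffs just make it unusually visible.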
A lot of the complaints about AI are really about the drawbacks of information systems more generally, and the failure modes pointed out are rarely novel. The "Cognitive Inbreeding" effect attributed to AI would also have occurred with Google search would it not? Lots of people type the same question into google and read the top results, instead of searching a more diverse set of information sources. It's interesting that the author mentions web search as a way to ameliorate this, when it seems to me that web search is just as capable of causing cognitive inbreeding.
AI is just the current scapegoat.
That is, of course, provided that you pay attention to whether it actually does research. In their current state, LLMs are practically useless for this purpose for the vast majority of users, as no one knows how they work, what to watch out for, what the failure modes look like, or how to keep nonsense apart from facts when both are presented with an equal amount of conviction. That’s not a user problem, it’s an education problem.
Nothing about the nature of evolution implies our current cognitive processing is ideal/sacred and shouldn't ever change.
Which is partially how we found ourselves in the midst of an obesity epidemic.