seems quite wrong to me.
If anything the trend seems to go the other way - when I was younger, pre-internet, most communication was face to face or voice over the phone.
Now the predominant medium seems to be text - SMS, WhatsApp, this text box I'm typing into now. I saw a stat the other day that online/app dating had gone from a minority to over 50% of how couples meet. And that is mostly a combination of some photos and text. Be able to write, or fade from the gene pool!
That said long form text may be different but those who write novels and the like were always a minority.
(source for the dating thing - not sure how accurate but kind of scary https://www.reddit.com/r/interestingasfuck/comments/1fzqgvk/...)
It isn't anymore, not for newer generations - e.g. Gen Z spending most of their time on TikTok and phones, and not knowing how to use a word processor.
In the span of the ~30 years PG is talking about, I can absolutely imagine some job where you speak to the AI and it writes the documents for you, and you never learned how to write one yourself. It will not be a good job, but millions of people will hold it. They will not be able to write with much sophistication themselves, ergo they will not be able to think with much sophistication either.
Online dating is not about writing. It was before Tinder, but it's not anymore. Like Instagram, it's about being skilled with photo filters and/or hiring a professional photographer. No one bothers to hire a profile writer - because no one reads the profile.
If the other person's photos are hawt you will click a button and the AI will send some funny jokes, and if you're hawt too you'll share locations and shag. Idiocracy or some Eloi/Morlocks world will be real.
>"Don't you hate Tuesdays?" "AHHHHHH"
so not really long-form essays. Maybe the future is that stuff?
This is a classic fallacy as old as society. “Whatever the hoi polloi are doing is by definition not the good stuff”. But long-term whatever the masses are doing always wins.
You know Shakespeare? He was the rube who thought plays could be entertaining to the masses. How quaint and silly, who would expect a commoner to appreciate a play. pfft.
Mozart? Taylor Swift of his day.
Printing press? Don’t even get me started, ew the commoners think they can just, like, write things down? How rude.
I’m as much an anti-fan of the short video communication trend as anyone, but it works. When bandwidth is cheap and video recording ubiquitous, video is a great medium. Who cares what you say, show me.
edit to add an uncomfortable truth: The in-crowd talks to develop ideas. What you see in writing is weeks, months, or even years behind the bleeding edge.
At no point did you address whether the shifting habits of younger generations will be bad for their literacy, instead making a general point that new trends in society are routinely panned by older members of such a society.
As a counterpoint, before radio and the phonograph, musical ability was quite widespread. Now, it's much rarer.
You haven't even attempted to address whether various developments in society and technology might do this to literacy the way earlier trends did to musical skills. I think that result is quite likely, by the way.
Fair, I was making a different point. Yes literacy might be reduced, my argument is that this isn’t necessarily a problem. Our abilities shift to take advantage of technology.
A lot like how we got really bad at memorizing long epics because we can just write them down instead.
That said, I don’t think writing/literacy will go away as much as we might fear. The new technologies are not a good enough replacement (yet?)
More going outside for you.
I think this is the article’s point - that this minority is going to shrink even more.
For most of history, writers were a tiny minority. It exploded 100x in the last few decades. If it goes down 10x, it's still way above where we were in the 1800s.
I know that the concept of dark ages is overblown, but still - something about relying on AI like this makes me think of the end of classical antiquity.
I agree with PG on this point, and have noticed that people around me are often surprised when they receive well-written WhatsApp/SMS messages that include proper punctuation and other linguistic markers. Additionally, many people rarely engage in handwriting today, and handwriting is known to improve clear thinking and literacy skills.
I wonder if your answer is intentionally badly written, or if that's a sign of the problem already affecting even us on HN :D
To begin with, the following assumption is false:
>To write well you have to think clearly, and thinking clearly is hard.
For most people, most life situations which require clear thinking have nothing to do with writing.
>This is why eminent professors often turn out to have resorted to plagiarism.
What's the percentage of such professors? At the university where I studied, there has been no case of plagiarism to this day. And plagiarism is not done because professors can't write, but due to other professional factors.
>If you're thinking without writing, you only think you're thinking.
As if writing is the only way to think well/correctly/effectively. My father never wrote a word: still, some of the most thoughtful statements I ever heard in my life were told to me by him during our conversations.
When you face a situation of danger, such as a wolf running towards you: will you start to write your thoughts about what you should do, or will you just run right away and decide on the safest paths to follow while you are escaping?
The problem with "clear thinking" is that it is subjective. I think Paul Graham and Leslie Lamport have experienced something like this: when they sit down to write about a certain topic, they realize that their initial thoughts were not nearly clear enough, and only after a number of iterations do they become clearer and clearer. Most of us don't write essays, so we simply don't recognize this feeling.
You: what nonsense. Clearly, B does not necessarily require A, and yet he says it does, how poorly argued.
I meant: since most life situations where we need clear thinking do not involve writing, we are obviously well equipped to think clearly.
And if thinking clearly is not that problematic for most people, then the author can't claim that we can't write because thinking clearly is hard, or that we can't think clearly.
Got it?
> "I meant: since most life situations where we need clear thinking do not involve writing, we are obviously well equipped to think clearly."
That's not the QED you seem to think it is. The statement that "most life situations where we need clear thinking do not involve writing" doesn't give any reason to think that most people are good at clear thinking most of the time, nor whether people find clear thinking easier with the help of writing or if writing has no benefit to the goal of clear thinking. You're just putting two opinions you have next to each other and acting like one confirms the other.
And a friendly tip, "have I explained better what I meant before?" would come off as a lot more polite than "got it?", which to anyone who agrees with the rest of your comment could easily read as snide/patronising, while anyone who thinks you're still wrong will see it as smug and wrongly confident. (Apologies if English isn't your first language, in which case you're very good at it, and apologies if you didn't want unsolicited opinions on how your choice of language makes you seem in my view!)
edit to give an analogy: I feel your argument is like if somebody said "control of body movement is key to being a great athlete", and you replied "everyone is always controlling their body movement, clearly therefore it's not relevant to how good an athlete is".
> "have I explained better what I meant before?" would come off as a lot more polite than "got it?"
Thank you very much.
PS. English is not my native language.
Stephen Hawking is the first example that comes to mind.
He developed a remarkable ability to perform complex calculations and visualize intricate mathematical concepts entirely in his mind. He once mentioned that his ALS diagnosis, which limited his physical abilities, led him to focus intensely on theoretical physics, as it required more intellectual than physical effort.
But sure, writing (and drawing) is a great tool to aid in deep thinking. So are AI tools.
PG is obviously talking about the mental process of writing, i.e. of organizing a complex network of thoughts in a linear hierarchy that others can grasp, not the physical one.
You're correct here.
> Stephen Hawking is the first example that comes to mind.
The post is obviously speaking of the general population or at best average professional, and in my opinion choosing one of the most brilliant exceptional scientific minds of our lifetimes is not a good counterargument for a piece that speaks of a potential problem with society at large.
Strange example to pick as someone who did not write.
Stephen Hawking's thinking and imagination wouldn't have meant much had he not finally penned them down for others to read, and neither would his ideas have been taken seriously had he chosen to make tiktoks or podcasts to explain them instead.
You have committed the Fallacy of the Inverse.
Most of us have neither the intellect of Hawking nor his situation.
Sure, some will thoughtlessly copy and paste, but for many, AI helps to structure their thoughts, and they think more clearly as a result.
a) No / little data: Whenever you are starting to think about a subject, you can ask it to give you a structure / categories.
b) Existing data: What I do very often is give it a lot of "raw data", like unstructured thoughts or an unstructured article, then I ask it to find suitable top categories.
For me it’s very important to emphasize that AI is a tool. You have to use it responsibly. But there is no reason not to use it.
Until it's not.
I'm not the type who'd say "don't use AI". Use whatever works. I myself became really fascinated by transformer LLMs / GPTs in winter 2019, then again when ChatGPT was released and for a good few months after that.
It's just that my interest and enthusiasm have almost vanished by now. Surely they will reemerge at some point.
This observation of Paul Graham's may generalize beyond writing: modern technology appears to turn populations into bi-modally distributed populations - for example, those that write/consume human-written prose and those that produce/consume AI-generated prose; those that can afford human medical doctors and those that can only afford to consult ChatMedicGPT or Wikipedia; those that can afford human teachers for their children and those that let EduGPT train them, etc. Generally speaking, I expect a trend where more affluent people will use higher quality human services and the rest will have to live with automation output.
It's interesting to think of humans as being like a premium service where AI's are a sort of knock-off/budget human service.
Unlike younger generations, who are growing up surrounded by AI-generated content, many of us older folks have had the experience of engaging directly with people and evaluating their competence. We developed a knack for quickly determining someone's skill level through just a few minutes of face-to-face conversation—a skill that was essential for navigating various life situations.
Now that anyone can use AI to generate seemingly competent text, videos, and more, and as in-person interactions decline, the conditions that once allowed us to gauge competence are fading. I worry that in the future, no one—including AI trained on our outputs—will be adept at making these assessments.
Those of us who take time to carefully compose arguments and revise them, as Paul suggests, will have a better handle on this, so that's a helpful consideration.
I worry strongly about a future like that in Idiocracy[1], where nobody has a clue about how to actually judge competence, and instead goes with the best sound bites.
The one path out that I can see, and it's unlikely, is to teach the skill of explicitly tracking history, and reviewing how well someone predicted the future, over time.
The explicit generation and curation of a reputation is part of that priceless nexus that they'll all be seeking in future generations, and yet it'll pale in comparison with the ability to size someone up in a few minutes of interaction.
- Only a brain chip could make AI usage undetectable in practice. Without that you can tell if the person is checking his phone etc. Though you're right that an in-person interaction will be needed, otherwise there's no way of knowing what the other person is doing or if he's a real person at all... And since the latter problem (dead internet) will only grow, perhaps beyond the rectifiable, in-person communication will surely be in business again.
- Once AI replacement of competent humans has reached a certain threshold, what do you stand to gain from testing a human's level thereof? Are you interviewing for "above AI" positions? If not, relying on AI will be as normal as relying on a calculator.
I think I have a bit of this knack, in some areas, tempered by an awareness of some of my blind spots, but most people don't even claim to have this knack...
As evidence from our own field: before the explosion of LLM cheating, we had the explosion of Leetcode hazing.
Because, supposedly, good experienced software developers couldn't plausibly recognize each other just by talking with each other.
So instead we whip out these douchetastic did-you-prep-for-this rituals. And afterwards you still have no idea what the other person would be like to work with (except that you now know both of you are willing to play fratbro nonsense games).
I've been intentionally changing up my candor so that people
who get caught up in the structure lose the message.
If u know you know
Just look at the quality of presidential debates and political discourse we've been having for the past decade. Not just in the US, but all over the world. The situation is perilous.
Lagniappe: https://www.youtube.com/watch?v=EwPnJXXX5Ic
As someone said, it's no wonder The Matrix chose the 90s as the peak of human civilization.
What is the mechanism you propose, by which the birth of a child makes every person now living incrementally dumber?
(Kids, if you ever see this, I'm not saying it wasn't worth it. But seriously, 2AM every night for months?)
IQ (from "intelligence quotient") is a number used to express the relative intelligence of a person.
So for the whole population it is constant by definition :)
And heading in the direction of Dick and Jane doesn't mean that we ever reached it; according to https://community.jmp.com/t5/image/serverpage/image-id/8926i... recent SOTUs should be comprehensible to high school freshmen.
(full discussion at https://community.jmp.com/t5/JMPer-Cable/Regression-to-model... ; through 2018 — anyone have 2022?)
EDIT: keep in mind that the expected "general" audience for the SOTU has also expanded dramatically due to technological change in between 1790 and 2018...
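For anyone curious how those SOTU grade levels get estimated: readability formulas like Flesch-Kincaid map sentence length and syllable counts to a US school grade. A minimal sketch (the syllable counter here is a rough vowel-group heuristic of my own, not necessarily what the linked analysis used):

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, with a silent-e adjustment."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # "mate" -> 1, but keep "the" -> 1
    return max(count, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59
```

Short, monosyllabic Dick-and-Jane prose scores near zero; long sentences full of polysyllabic words push the grade well past high school.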
It's interesting to be usually cautious but then predict something so radical, and yet with no real argument other than "AI is gonna replace us".
Painting should have been replaced by photography, but it hasn't been. In my opinion, there are still plenty of people who want to write, so there will still be plenty of people who know how to write.
And maybe my opinion is wrong, because it's an opinion. But to transform it into a certainty, I'd have to see much, much more data than a feeling and a conviction.
Writing may become the same thing. In the workplace, if someone is writing, they're probably doing it for their own entertainment. Some people write at home, writing journals, blogs, etc. Nobody will know that you're writing, unless it affects your thinking, and your thinking affects your work.
I think we already reached the stage where people stopped writing, before AI entered the picture. I rarely see anybody write a lengthy report any more. Reports have been replaced by PowerPoint, chat, e-mail, etc. One consequence is that knowledge is quickly lost. Or, it's developed by writing, but is communicated verbally.
Hopefully I'll live the couple of decades needed to find out if PG's prediction is correct; I would bet against it.
Here however, I do agree with his articulation -- "writing is thinking" -- and like you, I've thought a bit about the linear nature of writing.
My view is that the "jumble" of ideas/concepts/perspectives is just that -- a jumbled mess -- and the process of linearizing that mess requires certain cognitive aspects that we (humans) generally consider as constituting intelligence. IMO, the rapid generation of grammatically-correct + coherent linear sequences by LLMs is one reason some folks ascribe "intelligence" to them.
I liked his analogy about how the disappearance of widespread physical work meant that one now had to intentionally invest Time and Effort (at the gym) to maintain physical health. The facile nature of LLMs' "spitting out a linear sequence of words" will mean fewer and fewer people will continue to exercise the mental muscles to do that linearization on their own (unassisted by AI), and consequently, will experience widespread atrophy thereof.
I suspect thinking is similar, which brings up questions about LLMs as well. We all can now quickly write hundreds of generic business plans, but knowing what to focus on first is still the hard part.
I'm seeing that happen today with corporate documents (there's always that one enthusiast in each team who says "oh, let me improve that with [LLM]", and it's a slog to go through pages and pages of things that could be a bullet point). Quality has been trumped by mediocre quantity, and the cluelessness of the people who do this willingly baffles me.
As someone who's been writing pretty much constantly for over 30 years, who uses AI code completion to speed up first drafts (of code) but switches off everything except macOS's (pretty good) word completion for prose--and who absolutely refuses to use AI to generate work documents or even my blog post drafts--this post was a bit of an "oh, so this would be the ultimate consequence of keeping all of it on" moment.
Accelerating writing (with simple word completion) is fine. But letting the AI generate entire sentences or paragraphs (or even expand your draft's bullet points) will either have you stray from your original intent in writing or generate overly verbose banalities that only waste people's time.
I use iA Writer to draft a lot of my stuff, and its style checks have helped a lot to remove redundancies, clichés and filler, making my prose a bit more cohesive and to the point. That's been around for ages and it's not LLM-style AI (more of a traditional grammar checker), but that sort of assistance seems to be missing from pretty much every AI "writing aid"--they just generate verbose slop, and until that is fixed in a way that truly helps a writer LLMs are just a party trick.
Edit: I just realised that I wrote about this at length in February - https://taoofmac.com/space/blog/2024/02/24/1600#the-enshitti...
During the typewriter era, anyone's ability to produce pages and pages of text was limited by their ability to type. Nowadays, you can copy/paste large blocks of text and thus inflate documents to enormous sizes. Which works just like sand in the gearbox of the decision-making process.
I wonder if the future of ESG, DEI and such is that one AI will produce endless reports and another AI will check if the correct buzzwords are used in correct frequency. And instead of yearly reports, they could easily become daily or hourly reports...
It would be a way to tout "allyship" on the social networks without actually doing anything substantial.
* Less than 1% of your writing will be life-changing.
* 3% will be trivial to write.
* 4% will strongly resonate with others in a way you didn’t expect.
* 5% will be quite good.
* 15% probably should’ve never been published.
* 26% will elicit a reaction you did not expect. Positive or negative.
* 28% will become vastly better because you chose to edit.
* 30% will start as one piece but finish as another.
* 40% will be good solid writing.
* 45% will do much worse than you expect when published.
* 60% of your writing will never be finished. Be ok with that.
* 100% of your writing is worth your time.
Just curious, but do you think my collective 20 years of random posts across 10+ social media platforms, often with thousands of posts per platform, has been worth my time?
May be hard to answer. My view from the same behavior: yes, worth it.
Was everything productive for a career? For a relationship with someone else? Sometimes yes. Sometimes no.
Would love to hear your thoughts on my first question!
Sounds like some of the 15% or some of the 28%. :)