Posted by Stwerner 12 hours ago
Over the last couple months, I've been building world bibles, writing and visual style guides, and other documents for this project… think the fiction equivalent of all the markdown files we use for agentic development now. After that, this was about two weeks of additional polish work to cut out a lot of fluff and a lot of the LLM-isms. Happy to answer any questions about the process too if that would be interesting to anybody.
I did not realize this was AI generated while reading it until I came to the comments here... And I feel genuinely had? Like "oh wow, you got me"... I don't like this feeling.
It's certainly the longest thing (I know about) I've taken the time to read that was AI generated. The writing struck me as genuinely good, like something out of The New Yorker. I found the story really enjoyable.
I talk to AI basically all day, yet I am genuinely made uneasy by this.
Thinking deeper, it seems prudent that we tag submissions like this with a prefix. Example: "LLM: ". This would be similar to "Show HN: ". While we cannot control what the original sources choose to disclose, we can fill that gap ourselves.
My point: I agree with you: It is misleading that the blog post does not include a preface explaining it was written by an LLM (and ideally, the author's motivation to use an LLM). However, it is still a good blog post that has generated some thoughtful discussion on HN.
Without the inferred writer, it's much less interesting to me, except as a reminder that models change and I can't rely on the old tics to spot LLM prose consistently any more.
Quite honestly, I do that sometimes too -- but I _know_ that it's unreasonable.
Humans are designed to form emotional connections with non-emotional things. It's sort of our whole deal.
I find it interesting to ponder. We look at the luddite movement as futile and somewhat fatalistic in a way. I feel like the current attitude towards AI generated art will suffer the same fate—but I'm really not quite sure.
https://www.vice.com/en/article/luddites-definition-wrong-la...
How much did you pay for the shirt you're wearing now?
By arguing for letting humans work, particularly quality work, you're not especially finding a middle ground, more adopting the 1811 position of the OG Luddites who were opposed to being put out of work.
1) It’s good in the long run that they didn’t prevail at that time.
2) They did actually, in fact, have a point.
When AI can write convincingly enough, it is basically a honeypot for human readers. It looks well-written enough. The concept is interesting and we think it is going somewhere. The point is that AI cannot write anything good by itself, because writing is a form of communication. AI can't communicate, only generate output based on a prompt. At best, it produces an exploded version of a prompt, which is the only seed of interest that carries the whole thing.
Somebody had that nugget of an idea which is relevant for today's readers. They told the AI to write it up, with some tone or setting details, then probably edited it a bunch. If we enjoy any part of it, we are enjoying the bits of humanity peeking through the process, not the default text the AI wrote.
Or, I digress, it will be distinguishable from human work, but only because it's so much better than anything a human could ever have created. These AI tools that we have now are as dumb as they will ever be. If we ever reach AGI or superintelligence or whatever—or even if not, even if these tools just advance for 10 more years on their current trajectory—it's easy for me to imagine some scenario where the machines can generate something so perfect to your liking that you just prefer it to anything a human ever would have created, storytelling and all.
You can take the general case where AI can just generate a better movie than a team of humans ever could plausibly produce. After all, AI doesn't have any of the physical constraints of a movie studio—the budget, the logistics of traveling from location to location, the catering, the fact that the crew has to sleep, has to coordinate schedules, all that. AI, with some human involvement or not, could just keep iterating on some script on a laptop overnight until it's created an optimized version which is more satisfying to humans than any other human-made movie ever created. Or in a narrow case it could create the perfect movie for you, given what it knows about you and your interests. All human movies would look inferior.
For my kids, who I'm sure are going to grow up in a world where this type of art is embedded everywhere—and where the human version is almost certainly going to be worse—I don't think the desperate cries to see the last scrap of human ingenuity will mean anything. All of these people throwing rocks at Waymos, and others boycotting companies for generating ads rather than shooting one with a video studio; it's all so helpless, desperate, and obviously futile in the face of what's coming.
I mourn the future that seems plausible here but I also welcome it as inevitable. The technology is coming, and people are going to have to adapt one way or another.
When I'm listening to music, looking at art, seeing a play or a short film I want to feel connection to the humans behind it. AI is by definition missing that connection. That's what makes me retrospectively vomit at AI writings like these. That connection requires that the humans behind it are imperfect, the solo can have one or two sloppy notes, but at least it's genuine interaction. We have seen this same yearning for connection with all the "Don't use LLM to comment, use your true style of writing with its flaws" rules.
I'm 100% certain mainstream studios will be producing "perfect" content with AIs just like current mainstream pop stars have 10 ghost writers working on each song to create "perfect" songs. The good stuff will exist in the fringes as always and I'm ok with that as I've already been for years.
And the future may not be as settled as you think it is. Leaders try to sell you their vision of the future by saying it is settled and that things are certain, but that is because they want you to believe that, because if you and the masses believe so, it's more certain for the future to settle the way the leaders want. But you can also actively refuse that future and find a different future that's worth believing in yourself.
So what is AI bringing to the fans of these genres that the fans might value? Because it's not authenticity nor is it skills. What is the point you're trying to make?
> I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.
If you assume you're reading something from a person with intention and a perspective, who you could connect with or influence in some way, then that affects the experience of reading. It's not just the words on the page.
(* "Cat Person" honestly felt like the literary equivalent of Rickrolling; I would have stopped reading it after the first page if not for my friend's glowing endorsement.)
but if you knew it came from a human, it would be interesting as a window into what the writer was thinking
since there is no writer, no such window exists either
But LLMs don't have potential. You can make an LLM write a thousand articles in the next hour and it will not get one iota better at writing because of it. A person would massively improve merely from the act of writing a dozen, but 100x that effort and the LLM is no better off than when it started.
Despite every model release every 6 months being hailed as a "game changer", we can see from the fact that LLMs are just as empty and dumb as they were when GPT-2 was new half a decade ago that there really is no long-term potential here. Despite more and more power, and larger, hotter, more expensive data centers, it's an asymptotic return, and we've already passed the point of diminishing returns.
And you know, I wouldn't care all that much--hell, might even be enthusiastically involved--if folks could just be honest with themselves that this turd sandwich of a product is not going to bring about AGI.
You cannot even get angry or upset if you disagree with anything in the story, maybe the author’s despicable worldview permeating through the characters... because there's no author’s worldview, because there's no author. It's a window into nothing, except perhaps the myriad of stories in the model's training set.
I want to at least have the option of getting upset at the author.
And if you've read literally any science fiction, you will know the myriad ways that could be absolutely terrible for us.
But there was nobody there, and I'm only disappointed in myself for not noticing.
Read my comment below for a perspective.
For me, the answer to this riddle is very easy: I want to engage with other human minds. A robot (or AI) doesn't have a human mind, so I'm not interested in its "artistic" output.
It was never about how good it was. Of course AI slop adds insult to injury by being also bad. Currently. But it'll get better. My position was never that AI art (shorts, pictures, music, text) is to be frowned upon because it's bad. I don't like it because it's not the expression of a human mind.
It's a bit like how an AI boy/girlfriend is not the real deal, no matter how realistic -- and I'm sure they'll get uncannily realistic in the future. They aren't the real deal because there's no real human behind the facade of companionship.
Very few humans have managed this. This text is at the average level of "I want to pass the message and I'm trying to write professionally".
With stories that shared experience is between author and reader. Book clubs etc will try to extend that "shared experience" but primarily it is author <-> reader relationship.
Remove that "shared feeling with the author" and what meaning does it have?
It means, "Wow. Cool. I'm a member of a species that taught rocks to think. Holy fuck. That's pretty insanely fucking awesome. Wow. Wow, wow, wow. Fuck."
That's about all it means. Nothing was removed from your life, but something optional was added.
It has absolutely made my life worse, not better.
And yet, in ironic counterpoint, there is a different artist I follow on Spotify that does EDM-fusion-various-world-genres. And it’s very clearly prompt generated. And that doesn’t bother me.
My hypothesis is that it has to do with how we connect/resonate with the creations. If they are merely for entertainment, then we care less. But if the creation inspired an emotion/reasoning that connects us to other humans, we feel betrayed, nay, abandoned, when it turns out to be synthetic.
Personally I have an uneasiness with it and am correspondingly cautious. Often after a review and edits it loses that "smell". I kind of felt the same about NPM and package managers for a long time before using them became obligatory (for lack of a better word).
Are we conditioned to use other people's code unthinkingly, or is it something else?
With AI-generated text there is this disconnect between the audience and the prompter, who has an idea but not the skill to express it. Would you say reading an English translation of Dostoevsky is similar because you're connecting with the interpreter rather than the actual author? Or something as simple as an Asterix comic where the English translation is rarely literal but uses different English plays on words?
I wouldn't go as far as can't, but in general it won't be, and if any ideas are indeed communicated, they will be impersonal.
>With AI-generated text there is this disconnect between the audience and the prompter, who has an idea but not the skill to express it. Would you say reading an English translation of Dostoevsky is similar because you're connecting with the interpreter rather than the actual author? Or something as simple as an Asterix comic where the English translation is rarely literal but uses different English plays on words?
I can think of a better example. In comic circles there's the rewrite, which is when an editor isn't fluent in the original language, and so instead of actually translating, they just rewrite all the dialogue to something that matches the action. People (generally) hate rewrites. Unknowingly reading a rewrite provokes a similar feeling of betrayal that unknowingly reading LLM output provokes.
It feels great to use.
It feels terrible to have it used on you.
But I deeply feel that art only matters if there is an artist. The artist wants to convey something.
What makes you uneasy (if you are like me) is that a machine deliberately created emotions in your brain. And positive emotions, at that. It’s really something I can’t stand.
Of course this has always been a bit of a problem with digital art trying to masquerade as the real thing... I always think of programmed drums using real drum samples. In my adult life I found out that an album I loved as a teenager that listed a real drummer as the performer was actually 100% programmed (this was an otherwise very "organic" sounding heavy guitar album). I always had my suspicions since it was so perfect, but I experienced exactly what you are describing. I also never got over it.
One of the many things I love about art is when I encounter something that speaks to emotions I've yet to articulate into words. Few things are more tiring than being overwhelmed with emotion and lacking the ability to unpack what you're feeling.
So when I encounter art that's in conversation with these nebulous feelings, suddenly that which escaped my understanding can be given form. That formulation is like a lightning bolt of catharsis.
But I can't help but feel a piece of that catharsis is lost when I discover that it wasn't a human's hand that made the art, but a ball of linear algebra.
If I had to explain, I guess I would say that it's life affirming to know someone else out there in the world was feeling that unique blend of the human experience that I was. But now that AI is capable of generating text, images, music, etc. I can no longer tell if those emotions were shared by the author or if it was an artifact of the AI.
In this way, AI generated art seems more isolating? You can never be sure if what you're feeling is a genuine human experience or not.
This is what the deconstructionists were preparing us for, I guess. The author is dead, and if not dead, then fake. It was never a good idea to tie our sense of meaning to external validation.
The humanity immanent in the text came from you, the reader, not the author, and it has always been that way. Language never gave us access to the author's mind -- and to the extent that statement is wrong, it doesn't matter. AI is just another layer of text, coming between the reader and the same collective consciousness that a human author would presumably have drawn on. The artistic appreciation of that text is the sole privilege of the reader.
I think if you left it to its own devices, some of the narrative exposition stuff that humanized it would go off the rails.
It's really interesting to hear about others that have been exploring generating fiction with Claude. I clearly need to do some more work based on some of the comments, but it has been really interesting discovering and coming up with different techniques, both LLM-assisted and manual, to end up with something I felt confident enough about to put out.
I'd be curious to hear more about your experience!
and other stuff... it's not that good.
It felt like it was written by someone trying to quit an addiction to Corporate Memphis content spam. Like it came from some weird timeline where qntm was a LinkedIn influencer. It straddles an uncanny valley of being a criticism of the domination of The Corporation over human culture while at the same time wallowing in The Corporate Eunuch Voice, not because it's a subversion of form, but because it knows no other way.
I then came to the comments section and found the piece that brought the picture into focus.
It's just... hard to explain the specific kind of disappointment. Perhaps there is a German phrase-with-all-the-spaces-removed kind of word that describes it succinctly. I feel like I exist in this Truman Show kind of world where everyone is trying to gaslight me into thinking LLMs are important, but they aren't very good at it and whenever I try to find out how or why, it all evaporates away. I was very reluctant to say that because I'm sure it's going to come with a heaping side of Extremely Earnest Walruses ready to Have A Debate about it and I just don't have the energy for it anymore. That's the baseline existence right now. It's like a really boring version of Gamergate.
And then this thing comes along. And yeah, it's a thing. You got me. Ha. Ha. Joke's on me. I lost the shitty, fake version of the Turing Test that I didn't even ask to be a part of. And it reminds me of the Microsoft Hololens: a massively impressive technological achievement that was ultimately a terrible consumer experience. Like if you figured out Fusion Power but it could only power Guy Fieri restaurants.
Ever since the pandemic I've been keenly aware of the complete destruction of every enjoyable social structure around me. The meetups that evaporated. The offices we essentially squatted in that suddenly turned Extremely Concerned about what people were doing. The complete lack of any social interaction at work because we're all so busy because we're running at half-workforce and pretty sure the executive suite is salivating at the bit to lay the rest of us off. The lack of care about how this is impacting open source software. The lack of concern for people.
I feel like my entire adult life was this slow, agonizing, but at least constant push forward into recognizing the humanity in others and creating a kind and diverse world, and then overnight it's all been destroyed, and half the people I see online are cheering it on like it's Technojesus coming to absolve them of their sins of never learning to invert a binary tree. Where the blogs and books and startups of the early 2000s were about finding the hidden potential in people--the college dropout working as a barista who just needs someone to give them a chance to be a programmer or a graphic designer or an artist or whatever--the modern era seems to all be about the useless middle management guy who never had a creative bone in his body no longer having to write status reports to his equally mendacious boss on his own anymore.
We might be restarting old coal plants, but at least Kevin in middle management gets to enjoy "programming" again.
"The tool had changed. The domain had not. People who understood the domain and could also diagnose specification problems were the most valuable people in any industry, and most of them, like Tom, had arrived at the job sideways from something else."
People my age and older arrived in the software business sideways too; in my case from physics and electronics. My background in physics was a great help to me later when programming in the domain of electrical machines, because I could speak both languages, so to speak.
Much grander people than me came into software sideways as I was reminded when reading Bertrand Meyer's in memoriam of Tony Hoare; Tony Hoare's first degree was classics at Oxford.
So perhaps we aren't entering a new phase, merely returning to our roots with new tools.
Some inconsistencies that stuck out / I found interesting:
- Hwy 29 doesn't run through Marshfield; it's about 15 miles north.
- Not a lot of people grow cabbage in central Wisconsin ;)
- No corrugated sheet metal buildings like in the first image around there.
- I don't think there's a County Road K near Marshfield, not in Marathon County at least.
FWIW I think this story is neat, but wrong about farmers and their outlooks: agriculture is probably one of the most data-driven industries out there, there are not many family farmers left (the kind of farmers depicted in this story), and it is largely industrial-scale at this point.
All that said, as a fictional experiment it's pretty cool!
Really a great story, and to the extent it was AI-written, well... even greater.
I'm happily surprised (frankly amazed TBH) that the submitter didn't get bawled out by people flagging the post and accusing him of posting slop.
Can you elaborate on this?
Hard to imagine many occupations that have undergone more radical change in the recent past than farming. The profession is now utterly technology-dependent, and a few companies like John Deere have hastened to take unfair advantage of that. Hence the growing advocacy of right-to-repair laws.
But I was able to get through the text, it's pretty good, you did great work cleaning it up. There's just a bit more to do to my taste.
The story is good.
I am also extremely interested in thinking about where software development is going, so I really appreciated the ideas that went into this.
Since you seem open to feedback, I want to add that I felt the generated images were a negative addition. Maybe they wouldn't be if they also got a little polish - the labels in them were particularly bad.
And thanks for the note about the images, I'll take that into account! I only really just started this project and am going to keep iterating as I learn to use the tools better and I find the right visual language for it.
Since you seem in the mood to give feedback ;) If you take a quick glance at the previous story, do you feel the same way about the images in that one or was it just this one's that you found particularly unpolished?
I did read your previous story (not as polished but still interesting) and noticed that in the image linked to "beautiful but the Mandarin module has a tone recognition bug that makes it nearly impossible for non-native speakers", the characters in the tone bug were Hebrew rather than Chinese. Interesting... I might have a look again and translate.
> Over the last couple months, I've been building world bibles, writing and visual style guides, and other documents for this project [...] about two weeks of additional polish work to cut out a lot of fluff and a lot of the LLM-isms.
The amount of work and walltime expended sounds about right. You have discovered / stumbled upon the relatively well known but little appreciated job of a publishing editor. It takes a lot of nitty-gritty work and built up domain knowledge ("world bibles") to direct a piece of writing - and its author - to a level where you confidently believe that you have captured the intent and desired tone of the piece, while keeping it sufficiently tight, engaging and interesting / non-patronising enough for its audience.
Disclosure: did ~decade of freelance writing around the turn of the millennium, and have had the privilege of being schooled by a small group of good old-school journalists. And then had a publishing editor assigned for a separate project, from whom I learned even more about writing.
“Yeah, I updated the silage ratios. What does that have to do with milk prices?”
“Everything.”
He showed Ethan the chain: feed tool regenerated → output format shifted → pricing tool misparsed → margins calculated wrong → prices dropped → contracts auto-negotiated at below-market rates. Five links, each one individually innocuous, collectively costing Ethan roughly $14,000.
Ethan looked ill.
--
I've re-read this a few times now and can't work out how the interpreted price of feed going up and the interpreted margins going down result in a program setting lower prices on the resulting milk. I feel like this must have gotten reversed in the author's mind; it's not like it's a typo, since there are multiple references in the story to this cause and effect. Am I missing something?
[Edited for clarity]
The per-head vs. per-hundredweight swap is actually plausible for inflating apparent costs: a dairy cow weighs 12-15 hundredweights, so a $5/head daily feed cost misread as $5/hundredweight would balloon to $60-75/head. So "feed expenses look much higher" checks out.
But then the pricing logic goes the wrong direction. Higher perceived costs -> lower calculated margin -> the rational response is to raise prices to restore margin, or at minimum flag the squeeze. Dropping prices when you think you're losing money on every unit is only coherent if the tool is running some kind of volume/elasticity model where it reasons "margins are tight, compete on price" — which is a legitimately dangerous default for spot milk contracts.
Most likely it's just a logic inversion in the story. Either the misparse inflated costs and the tool correctly raised prices (locking in above-market rates Ethan didn't notice because he was happy), or the misparse deflated costs and the tool undercut on price thinking it had headroom. Both are realistic failure modes. The version in the story mixes the two.
Fittingly, a specification error in a story about specification errors.
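The per-head vs. per-hundredweight swap is easy to sketch in a few lines. This is only an illustration of the misparse described above; the cow weight, feed cost, and milk revenue figures are assumed for the example, not taken from the story:

```python
# Hypothetical illustration of the unit misparse: a per-head feed cost
# read as a per-hundredweight (cwt) cost. All numbers are assumed.

COW_WEIGHT_CWT = 13.0            # a dairy cow weighs roughly 12-15 cwt
TRUE_FEED_COST_PER_HEAD = 5.0    # assumed daily feed cost, $/head
MILK_REVENUE_PER_HEAD = 20.0     # assumed daily milk revenue, $/head

# The misparse: $5/head interpreted as $5/cwt, then scaled back up
# by the animal's weight, inflating apparent cost ~13x.
misparsed_cost_per_head = TRUE_FEED_COST_PER_HEAD * COW_WEIGHT_CWT

true_margin = MILK_REVENUE_PER_HEAD - TRUE_FEED_COST_PER_HEAD
apparent_margin = MILK_REVENUE_PER_HEAD - misparsed_cost_per_head

print(misparsed_cost_per_head)   # 65.0 -- feed expenses look much higher
print(true_margin)               # 15.0 -- healthy real margin
print(apparent_margin)           # -45.0 -- tool thinks it's losing money
```

With a deeply negative apparent margin, a sane pricing tool would raise prices or flag the squeeze; the story's tool lowering them is exactly the inversion being pointed out.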
The premise/structure/flavor of TFA is an almost pitch-perfect imitation of that kind of voice, to the point that I immediately flagged it as probably generated. I actually think a modern person would have some difficulty even in consciously mimicking it. There's an "aw shucks" yokel-thrown-into-the-future aspect to it. Plot-wise you have rural bicycle repair shop that expands operations to support nuclear reactors and that sort of thing. Substitute any of the more atomic-age stuff for AI stuff and you're mostly there. If you have some Amazing Stories from the 1920s on your shelf then you kind of know what I mean.
Which is totally fair, I'm honestly not! I haven't read much of that myself
It was the text equivalent of hearing a singer whom you know has perfect pitch sing atonal playground songs.
Take this sentence:
Tom had been an agricultural equipment technician, which meant he’d fixed tractors, combines, GPS guidance systems, and the increasingly complex control software that made modern farming possible.
Perfectly fine, a nice set up for a next sentence, but then you get hit with this:
He’d worked for a John Deere dealership in Marshfield for eleven years.
Bad. The rhythm is all off. Minor improvement:
For eleven years he had worked for a John Deere dealership in the nearby town of Marshfield.
Minor change, really, but the fluidity of the language matters a lot and just that one sentence written that one way breaks the flow.
It's almost as if a second person interjected and wrote that sentence, like a friend's annoying girlfriend who won't let him finish a story without adding in her parts.
But two notes do not a music make, so let's compare that one minor change with a before-and-after of all three opening sentences:
Original:
Tom had been an agricultural equipment technician, which meant he’d fixed tractors, combines, GPS guidance systems, and the increasingly complex control software that made modern farming possible. He’d worked for a John Deere dealership in Marshfield for eleven years. Then the transition happened, and the dealership’s software repair business evaporated; the machines still needed repair, but the software on the machines stopped being something you repaired.
Modified:
Tom had been an agricultural equipment technician, which meant he’d fixed tractors, combines, GPS guidance systems, and the increasingly complex control software that made modern farming possible. For eleven years he had worked for a John Deere dealership in the nearby town of Marshfield. Then the transition happened, and the dealership’s software repair business evaporated; the machines still needed repair, but the software on the machines stopped being something you repaired.
* this is a good attempt at a work of art, but written in a generic style that detracts from it
* nobody making genuinely good attempts at art like this would also write so generically
* and if they were making it generic on purpose, they wouldn't be able to do it so flawlessly
* oh, it must be AI
I guess I can discern the presence of a human artist, but only in the idea, which just means it was a good prompt.
I'm mildly thrown off by some inconsistencies. Carol says "I’ve been under-watering that spot on purpose for thirty years," and then a paragraph down Tom's thoughts say "Carol didn’t know that she under-watered the clay spot." Carol considers a drip irrigation timer the last acceptable innovation, but then the illustration points to the greenhouse as the last acceptable one. Several other things as well, mostly in the illustrations.
Are these real inconsistencies or am I misunderstanding? Was this story AI-assisted (in part or all)? Is this meta-commentary?
I guess I'm also learning the value of working with an editor from first principles... over the last couple weeks before publishing I read through and made edits to this piece at least twice a day and still didn't catch this.
I don't think that phrase means what you are trying to say here.
What it doesn't mean:
- learning by doing
I believe it generally means: a formalization that comes after a subject is understood so well that you can reduce it to "first principles" that imply the rest. Or, the production of a hypothesis by deduction from widely-accepted principles.
Don't know why that makes me annoyed; maybe because it's the depressing seriousness of being a "prompter" and the Americana framing of it.