
Posted by Stwerner 14 hours ago

Warranty Void If Regenerated (nearzero.software)
As an experiment I started asking Claude to explain things to me with a fiction story and it ended up being really good, so I started seeing how far I could take it and what it would take to polish it enough to share publicly.

Over the last couple months, I've been building world bibles, writing and visual style guides, and other documents for this project… think the fiction equivalent of all the markdown files we use for agentic development now. After that, this was about two weeks of additional polish work to cut out a lot of fluff and a lot of the LLM-isms. Happy to answer any questions about the process too if that would be interesting to anybody.

323 points | 181 comments
rswail 5 hours ago|
I'm very impressed that was written by an LLM.

Does that make the OP an "authoring mechanic"? Or an "AI editor"?

Douglas Adams had it right: the problem was not that the answer was useless, it was that people didn't know what the right question was.

yaur 3 hours ago||
> Tom pulled up the tool’s specification on his diagnostic display. This was always the first step: read the spec, not the code.

Clearly this writer has never felt the frustration of CC telling them a feature was never part of the plan, because it overwrote the plan and then compacted.
BatteryMountain 4 hours ago||
LLMs also do well with writing parables, so try something like: "Write a parable about a software engineer's battle against the compiler, and his discovery that letting go of control and letting the compiler help him leads to better applications. The developer is a toad, but also a monk, and the compiler is a snake." You can do it with any profession ("doctor vs. management", "nurse working overtime") and it can write very insightful parables.
misiek08 3 hours ago||
It summarized the nature of humans today nicely: we are ready to pay almost any amount as a one-time price, but when it shifts to a subscription we won't pay even a tenth of that.
furyofantares 7 hours ago||
Nanoclaw is the first hint I've seen of a new type of software: user-customizable code. It's not spec-to-software like in the story, but it is rather interesting. You fork it, and then when you add features it self-modifies. I haven't looked deeply, but I'm not sure how you get updates after that; I guess you can have it pull and merge itself for a while, but if you ever get to where you can't merge anymore, I'm not sure what you do.

As for spec-to-software, I am still pretty unsure about this. Right now, of course, we are not really that close: it takes too much iteration to get from a prompt to a usable piece of software, and even then you need a good prompt. I'm also not sure about regenerating, because of variation in the result. The space of acceptable solutions isn't just one program, it's many, and while a random acceptable solution might be fine for the original generation, it may be extremely annoying to randomly get a different acceptable solution when regenerating, since you have to re-learn how to use it (thinking about UI specifically here). Maybe these are the same problem: once you can one-shot the software from a spec, you may not see much variation in the solution, since you aren't doing a somewhat random walk while iterating on the result.

I also don't know if many users really want to generate their own solutions. That's putting a lot of work on the user to even know what a good idea is. Figuring out what the good ideas are is already a huge part of making software, probably harder than implementing it. Maybe small-(ish) businesses will, like the farmers in the story, but end-users, maybe not, at least not in general.

I do think there is SOMETHING to all this, but it's really hard to predict what it's gonna look like, which is why I appreciate this piece so much.

andreybaskov 5 hours ago||
Reading this was a roller coaster for me.

Because of a bad habit of reading comments before the link, I knew it was AI. I read it regardless, and... I still enjoyed it!

I'm very much not a writer or a critic, so my bar for good writing is likely very low. Yet I can't shake this weird feeling that I truly enjoyed the writing and felt the emotions, _while_ knowing it's an LLM.

I'm guessing the human touch afterward is what made it pleasant to read. I'd love to see the commit history of the process. Fun times we live in!

cortesoft 1 day ago||
I do enjoy this sort of speculative fiction that thinks through the future consequences of something in its early stages, like AI is right now. There are some interesting ideas in here about where the work will shift.

However, I do wonder if it is a bit too hung up on the current state of the technology and the issues we are facing right now. For example, the idea that AI-coded tools won't be able to handle (or even detect) that upstream data has changed format or methodology. Why wouldn't this be something that AI just learns to deal with? There is nothing inherent in the problem that is impossible for a computer to handle, and no reason to think AIs can't learn to code defensively for this sort of thing. Even if it requires active monitoring and remediation, surely even today's AIs could be programmed to monitor for these sorts of changes and modify existing code to match when they occur. In the future, this will likely be even easier.
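The kind of defensive monitoring described above is sketchable even without any AI in the loop; this is a hypothetical example (the column names and the `check_upstream_schema` helper are invented, not from the story):

```python
# Hypothetical sketch: a guard that detects upstream schema drift
# before it silently breaks the downstream tool.
import csv
import io

# The schema the generated tool was built against (assumed for illustration).
EXPECTED_COLUMNS = {"field_id", "moisture_pct", "timestamp"}

def check_upstream_schema(raw: str) -> set:
    """Return the set of expected columns missing from the upstream feed."""
    reader = csv.DictReader(io.StringIO(raw))
    actual = set(reader.fieldnames or [])
    return EXPECTED_COLUMNS - actual

# A monitor would run this on each fetch and hand any non-empty result
# to an agent (or a human) for remediation instead of processing blindly.
feed = "field_id,moisture,timestamp\nA7,0.31,2031-05-02T06:00Z\n"
missing = check_upstream_schema(feed)  # {"moisture_pct"}: upstream renamed a column
```

An agent wired to this check could then be prompted with the diff ("upstream renamed `moisture_pct` to `moisture`") and asked to patch the ingestion code, which is exactly the remediation loop the comment imagines.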

The same is true of the 'orchestration' job. People have already begun to solve this with the idea of a 'supervisor' agent that designs the overall system and delegates tasks to the sub-systems. The supervisor agent can create and enforce the contracts between the various sub-systems. There is no reason to think this won't get even better.
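The contract-enforcement step a supervisor agent performs can be sketched as a plain validation pass; everything here (the `validate_contract` helper, the field names, the sample payload) is invented for illustration:

```python
# Hypothetical sketch: a supervisor checking a sub-agent's output against
# the contract it imposed, before wiring that output into another sub-system.
def validate_contract(payload: dict, contract: dict) -> list:
    """Return a list of contract violations (empty means the payload conforms)."""
    errors = []
    for field, expected_type in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return errors

# Contract the supervisor imposes on an imagined "irrigation planner" sub-agent.
PLAN_CONTRACT = {"zone": str, "liters": float, "start": str}

plan = {"zone": "north", "liters": "lots", "start": "06:00"}
violations = validate_contract(plan, PLAN_CONTRACT)
# Non-empty, so the supervisor rejects the plan and re-prompts the sub-agent.
```

Deterministic checks like this are the cheap half of the job; the supervisor's real value is deciding what the contracts should be in the first place.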

We are SO early in this AI journey that I don't think we can yet tell what is simply impossible for an AI to ever accomplish and what we just haven't figured out yet.

andai 1 day ago|||
Yeah, in the real world, Tom is already an OpenClaw instance...
Stwerner 1 day ago||
Funny I actually saw this tweet this morning about an Openclaw instance getting too advanced for the users to know how to control and fix: https://x.com/jspeiser/status/2033880731202547784?s=46&t=sAq...
Imustaskforhelp 13 hours ago||
> Funny I actually saw this tweet this morning about an Openclaw instance getting too advanced for the users to know how to control and fix: https://x.com/jspeiser/status/2033880731202547784?s=46&t=sAq...

I feel like this ultimately boils down to something similar to nocode vs code debates that you mention. (Is openclaw having these flowcharts similar to nocode territory?)

At some point, code is more efficient at doing this. Maybe people will then have that code itself generated by AI, but then you are one hallucination away from a security nightmare again, or it just becomes an OpenClaw-type thing once more.

But even then, the question ultimately boils down to responsibility. AI can't bear responsibility, and there are projects which need it, because that is how things stay secure.

I think the conclusion is that we need developers in the loop for responsibility and checks even if AI-generated code stays prevalent, and some developers already call themselves slop janitors, in the sense that they remove the slop from the codebase.

I do believe the underlying reason is responsibility. We need someone to be accountable for the code, someone who understands it well enough to keep things from going south whenever a project requires security, which for almost all production work (not just basic tinkering) it does.

Stwerner 11 hours ago||
Yeah, responsibility and accountability are also areas I'd like to explore. I'm mostly digging through this artifact I created with Claude, looking at first-order and second-order effects, and then at "traffic jams" in the sense of "good science fiction doesn't predict the car, it predicts the traffic jam", and at what kinds of roles might pop up to solve those issues: https://claude.ai/public/artifacts/39e718fa-bc4b-4f45-a3d5-5...

I've mostly been digging through my own version of that and trying to find things I find interesting and seeing what kinds of stories we can build about what a day in that job might look like.

gambiting 1 day ago|||
>>There is no reason to think AIs can't learn how to code defensively for this sort of thing.

For the exact same reason that there is absolutely no technical obstacle to two departments in a company talking to each other and exchanging data, and yet, because of <whatever>, they haven't done so in 20 years.

The idea that farmers will just buy "AI" as a blob that is meant to do a thing, and that these blobs will never interact with each other because they weren't designed to (as in: John Deere really doesn't want their AI blob talking to an AI blob made by someone else, even if there is literally no technical reason it shouldn't be possible), seems like the most likely way things will go. It's how we've been operating for a long time, and AI won't change it.

cactusplant7374 14 hours ago|||
> The supervisor agent can create and enforce the contracts between the various sub-systems.

Or you can ask the agent to do this after each round. Or before a deploy. They are great at performing analysis.

cello305 11 hours ago||
[dead]
neilv 10 hours ago||
When I saw this the other day -- and it just went on and on, like a good human author who was going to write this kind of story probably wouldn't -- I looked for a note that it was AI-generated, and I didn't find it.

All I found was a human name given as the author.

We might generously say that the AI was a ghostwriter, or an unattributed collaboration with a ghostwriter, which IIUC is sometimes considered OK within the field of writing. But LLMs carry additional ethical baggage in the minds of writers. I think you won't find a sympathetic ear from professional writers on this.

I understand enthusiasm about tweaking AI, and/or enthusiasm about the commercial potential of that right now. But I'm disappointed to find an AI-generated article pushed on HN under the false pretense of being human-written. Especially an article that requires considerable investment of time even to skim.

mikepurvis 8 hours ago||
I continue to resonate with the Oxide take when I hear this kind of sentiment expressed about AI prose:

"... LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.

If, however, prose is LLM-generated, this social contract becomes ripped up: a reader cannot assume that the writer understands their ideas because they might not so much have read the product of the LLM that they tasked to write it. If one is lucky, these are LLM hallucinations: obviously wrong and quickly discarded. If one is unlucky, however, it will be a kind of LLM-induced cognitive dissonance: a puzzle in which pieces don’t fit because there is in fact no puzzle at all. This can leave a reader frustrated: why should they spend more time reading prose than the writer spent writing it?"

https://rfd.shared.oxide.computer/rfd/0576#_llms_as_writers

CiscoCodex 9 hours ago||
I sadly agree with this sentiment. But to add my own thoughts: I wonder if our “human generation” (everyone consciously existing today) is just plainly dinosaurs. In three decades we’ll have a society that has known LLMs from birth.

As such, we can’t comprehend the world they will live in: a world in which you ask your device for any story and it gives you an entire book to read. I’d like to think that as humans we inevitably want whatever is next, so I’d like to think this future generation will learn not only to control these tools, but to be far more creative with them than current people can even imagine.

Did people who used typewriters imagine a world with iPhones? Did people flying planes imagine self landing rockets? Did people riding horses imagine electric cars? Did people living in caves imagine ocean crossing ships?

neilv 9 hours ago||
> Did people who used typewriters imagine a world with iPhones? Did people flying planes imagine self landing rockets?

Yes, science fiction writers and readers have, since before any of us were born.

CiscoCodex 7 hours ago||
I kindly can’t tell if you missed my point. As much as past writers and readers could imagine a version of our present, I also imagine that if they were transported here they would still be in awe of what they saw.
neilv 5 hours ago||
I agree. I imagine that a writer who predicted modern technology would still be in awe to see smartphone videoconf halfway around the globe finally realized.

And also be surprised by some of the uses to which it's put. And horrified by some of the societal backsliding despite what should be utopian technology.

nirav72 7 hours ago||
Thanks for sharing. This was an amazing read. I’d love to see novels in a similar style, with stories about speculative near-future tech and worlds.
dwd 9 hours ago|
"This was the mechanic’s paradox: the cheaper you were relative to the cost of failure, the more your clients needed you; and the more they needed you, the more they resisted the implication that they’d need you again."

This is a common issue I have from building websites for SMEs. It's not until Google updates their algorithm, killing their ranking and slowing their sales leads, that you hear from them.

There is wisdom in constantly upselling to your customers (we offer management services and SEO, and are cautiously moving into AIO). They may say no, but then you have a fallback: you offered things that would have mitigated their current crisis.
