Posted by Stwerner 14 hours ago
Over the last couple months, I've been building world bibles, writing and visual style guides, and other documents for this project… think the fiction equivalent of all the markdown files we use for agentic development now. After that, this was about two weeks of additional polish work to cut out a lot of fluff and a lot of the LLM-isms. Happy to answer any questions about the process too if that would be interesting to anybody.
Does that make the OP an "authoring mechanic"? Or an "AI editor"?
Douglas Adams had it right: the problem was not that the answer was useless, it was that people didn't know what the right question was.
As for spec-to-software - I am still pretty unsure about this. Right now, of course, we are not really that close: it takes too much iteration to get from a prompt to a usable piece of software, and even then you need a good prompt. I'm also not sure about re-generating, because of variation in what the result might be. The space of acceptable solutions isn't just one program, it's lots, and a random acceptable solution might be fine for the original generation, but it may be extremely annoying to randomly get a different acceptable solution when regenerating, since you have to re-learn how to use it (thinking about UI specifically here). Maybe these are the same problem: once you can one-shot the software from a spec, maybe there won't be much variation in the solution, since you're no longer doing a somewhat random walk while iterating on the result.
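One mundane way to get that stability - at least until one-shotting really is deterministic - is to never regenerate an unchanged spec at all. A minimal sketch in Python; the cache location and the `generate` callable are hypothetical stand-ins for whatever spec-to-software step is actually in play:

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("generated")  # hypothetical artifact cache


def spec_key(spec: dict) -> str:
    """Stable hash of a spec: the same spec always maps to the same artifact."""
    canonical = json.dumps(spec, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


def get_or_generate(spec: dict, generate) -> str:
    """Return the cached program for this spec, regenerating only on a miss.

    Because lookup is keyed by the spec's hash, re-running an unchanged spec
    can never hand the user a *different* acceptable solution to re-learn.
    """
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / f"{spec_key(spec)}.txt"
    if not path.exists():
        path.write_text(generate(spec))  # e.g. an LLM call
    return path.read_text()
```

That dodges the UI-churn problem for regeneration, though obviously not the variation between two different specs that both describe the "same" tool.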
I also don't know if many users really want to generate their own solutions. That's putting a lot of work on the user to even know what a good idea is. Figuring out what the good ideas are is already a huge part of making software, probably harder than implementing it. Maybe small(ish) businesses will, like the farmers in the story, but end-users, maybe not, at least not in general.
I do think there is SOMETHING to all this, but it's really hard to predict what it's gonna look like, which is why I appreciate this piece so much.
Because of a bad habit of reading comments before the link, I knew it was AI. I read it regardless, and... I still enjoyed it!
I'm very much not a writer or a critic, so my bar for good writing is likely very low. Yet I can't shake off this weird feeling that I truly enjoyed the writing and felt the emotions, _while_ knowing it's LLM output.
I'm guessing the human touch-up afterward is what made it pleasant to read. I'd love to see the commit history of the process. Fun times we live in!
However, I do wonder if it is a bit too hung up on the current state of the technology and the current issues we are facing. For example, the idea that the AI-coded tools won't be able to handle (or even detect) that upstream data has changed format or methodology. Why wouldn't this be something that AI just learns to deal with? There is nothing inherent in the problem that is impossible for a computer to handle, and no reason to think AIs can't learn to code defensively for this sort of thing. Even if it requires active monitoring and remediation, surely even today's AIs could be programmed to watch for these sorts of changes and modify the existing code to match them when they occur. In the future, this will likely be even easier.
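For what it's worth, the detection half of that is boring, deterministic code today - the kind of check an agent could write once and run on a schedule. A rough sketch, with hypothetical column names standing in for the kind of farm-data feed the story describes:

```python
import csv

# Hypothetical schema the generated tooling was built against.
EXPECTED_COLUMNS = {"date", "field_id", "moisture", "yield_estimate"}


def check_feed(path: str) -> list[str]:
    """Flag upstream format changes before they silently break downstream code.

    Returns human-readable problems; an empty list means the feed still looks
    like what the tool expects. A monitoring agent could run this on every
    pull and trigger remediation (or a code rewrite) when anything shows up.
    """
    problems = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        columns = set(reader.fieldnames or [])
        if missing := EXPECTED_COLUMNS - columns:
            problems.append(f"columns disappeared upstream: {sorted(missing)}")
        if added := columns - EXPECTED_COLUMNS:
            problems.append(f"new columns appeared upstream: {sorted(added)}")
        for i, row in enumerate(reader):
            try:
                float(row["moisture"])  # spot-check a field's type/meaning
            except (KeyError, TypeError, ValueError):
                problems.append(f"row {i}: moisture is no longer numeric")
                break  # one example is enough to raise the alarm
    return problems
```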
The same thing is true with the 'orchestration' job. People have already begun to solve this issue, with the idea of a 'supervisor' agent that designs the overall system and delegates tasks to the sub-systems. The supervisor agent can create and enforce the contracts between the various sub-systems. There is no reason to think this won't get even better.
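To make "create and enforce the contracts" concrete, here is a minimal sketch of the gate a supervisor might put between sub-systems - `Contract` and `run_step` are hypothetical shapes, not any particular framework's API:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Contract:
    """What the supervisor promises downstream about a sub-agent's output."""
    required_keys: set[str]
    validate: Callable[[dict], bool]


def run_step(agent: Callable[[dict], dict], task: dict, contract: Contract) -> dict:
    """Run one sub-agent and enforce its contract before handing the result on."""
    result = agent(task)
    missing = contract.required_keys - result.keys()
    if missing or not contract.validate(result):
        # Never propagate a violation silently: retry, escalate, or halt here.
        raise ValueError(f"sub-agent broke its contract (missing: {sorted(missing)})")
    return result
```

The point is only that every hand-off is gated, so one sub-system drifting out of spec can't quietly poison the rest.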
We are SO early in this AI journey that I don't think we can yet fully understand what is simply impossible for an AI to ever accomplish and what we just haven't figured out yet.
I feel like this ultimately boils down to something similar to the nocode-vs-code debates that you mention. (Does openclaw having these flowcharts put it in nocode territory?)
At some point, code is more efficient at doing this. Maybe people will then have this code itself be generated by AI, but then, once again, you are one hallucination away from a security nightmare - or doesn't it become an openclaw-type thing once again?
But even after that, at some point, the question ultimately boils down to responsibility. AI can't bear responsibility, and there are projects which need responsibility, because that is how things stay secure.
I think the conclusion from this is that we need developers in the loop for the responsibility and the checks, even if AI-generated code stays prevalent. We are already seeing some developers go ahead and call themselves slop janitors, in the sense that they will remove the slop from the codebase.
I do believe that the underlying reason is responsibility. We need someone to be accountable for the code, someone who understands it enough to take a look and prevent things from going south, whenever the project requires security - which it does for almost everything production-related, not just basic tinkering.
I've mostly been digging through my own version of that and trying to find things I find interesting and seeing what kinds of stories we can build about what a day in that job might look like.
For the exact same reason that there is absolutely no technical barrier to two departments in a company talking to each other and exchanging data, and yet, because of <whatever> reason, they haven't done it in 20 years.
The idea that farmers will just buy "AI" as a blob that is meant to do a thing, and that these blobs will never interact with each other because they weren't designed to (as in: John Deere really doesn't want their AI blob to talk to the AI blob made by someone else, even if there is literally no technical reason why it shouldn't be possible), seems like the most likely way things will go - it's how we've been operating for a long time, and AI won't change it.
Or you can ask the agent to do this after each round. Or before a deploy. They are great at performing analysis.
All I found was a human name given as the author.
We might generously say that the AI was a ghostwriter, or an unattributed collaboration with a ghostwriter, which IIUC is sometimes considered OK within the field of writing. But LLMs carry additional ethical baggage in the minds of writers. I think you won't find a sympathetic ear from professional writers on this.
I understand enthusiasm about tweaking AI, and/or enthusiasm about the commercial potential of that right now. But I'm disappointed to find an AI-generated article pushed on HN under the false pretense of being human-written. Especially an article that requires considerable investment of time even to skim.
"... LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
If, however, prose is LLM-generated, this social contract becomes ripped up: a reader cannot assume that the writer understands their ideas because they might not so much have read the product of the LLM that they tasked to write it. If one is lucky, these are LLM hallucinations: obviously wrong and quickly discarded. If one is unlucky, however, it will be a kind of LLM-induced cognitive dissonance: a puzzle in which pieces don’t fit because there is in fact no puzzle at all. This can leave a reader frustrated: why should they spend more time reading prose than the writer spent writing it?"
As such, we can’t comprehend the world they live in. A world in which you ask your device for any story and it gives you an entire book to read. I’d like to think that as humans we inevitably want whatever is next. So I’d like to think this future generation will learn not only to control it, but to be more creative than current people can even imagine.
Did people who used typewriters imagine a world with iPhones? Did people flying planes imagine self landing rockets? Did people riding horses imagine electric cars? Did people living in caves imagine ocean crossing ships?
Yes, science fiction writers and readers have, since before any of us were born.
And also be surprised by some of the uses to which it's put. And horrified by some of the societal backsliding despite what should be utopian technology.
This is a common issue I run into building websites for SMEs. It's not until Google updates their algorithm - killing their ranking and slowing their sales leads - that you hear from them.
There is wisdom in constantly up-selling to your customers (we offer management services and SEO, and are cautiously moving into AIO). They may say no, but you then have a fallback: you offered things that would have mitigated their current crisis.