
Posted by brendanmc6 9 hours ago

Specsmaxxing – On overcoming AI psychosis, and why I write specs in YAML (acai.sh)
194 points | 215 comments
wasabinator 7 hours ago|
More nonsense buzzword soup du jour. Can I play along at home? How about Vibewatermaxxing? Surely in the new age it should catch on.

This industry is just getting more and more bonkers.

ltbarcly3 7 hours ago||
Grindmaxxing, a long form blog post that is actually just an advertisement for his website.
brendanmc6 7 hours ago||
Should I apologize for being excited about something I built and use daily, and for wanting people to try it, discuss it, critique it? I can't tell from the tone of your message.
hansmayer 7 hours ago|||
Read the room. What you "built" is neither exciting, nor something most people want to "try". Why? Because just like other AI boosters, you are still trying to somehow optimise the usage of natural language to make it work. But it will never "work" because of the way the stochastic ML system is built: it has failure baked into the system.
brendanmc6 6 hours ago||
Totally agree it's not exciting, even though I am personally excited by it, and I also agree it's not something most people want to try, even though some people do want to try it-- and I found a few of them right here on HN.

Disagree on the bit about it "never going to work" though.

Failure-prone stochastic ML systems produce testable, auditable code... just like failure-prone human brains can produce testable, auditable code. And in fact, in both cases, changes to our process can reduce the number of failures that slip past testing and audit. Or can reap other rewards. Finding a better process is what I'm interested in right now.

hansmayer 4 hours ago||
> Failure-prone stochastic ML systems produce testable, auditable code...

You're missing the bigger picture here. Yeah, they produce code. But "producing" code was never the bottleneck. Yes, you can pop out a webapp within a couple of hours, but now you have no clue how it works, even if it's a language and framework you are competent in, because you skipped the part where you understand how the parts fit together architecturally. So you wrote an elaborate spec, but the LLM "decides" to do something else. Maybe it doesn't make that PK autoincrement, or it throws in those nice empty "catch" blocks it ingested from various beginner tutorials, which will be very "helpful" when your application silently deviates from the happy-path execution that you spec'ed the hell out of in your virulent spec-driven workflow. So it "kinda" works, it generates the code. It works the way your kid's toy car works: it "drives", but it cannot be driven to work, can it? So it does not work in the big picture. It's not a reliable enterprise-ready system. It's a toy, and should be treated like one.
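The empty-catch failure mode described above can be sketched in a few lines. This is a contrived illustration (the function names and the `quantity` field are invented, not from anyone's actual codebase):

```javascript
// LLM-tutorial style: the error vanishes and the caller silently
// receives `undefined`, deviating from the happy path with no trace.
function parseQuantityUnsafe(input) {
  try {
    return JSON.parse(input).quantity;
  } catch (e) {
    // empty catch: nothing logged, nothing rethrown
  }
}

// Failing loudly instead: surface the problem to the caller.
function parseQuantitySafe(input) {
  const parsed = JSON.parse(input); // throws on malformed input
  if (typeof parsed.quantity !== "number") {
    throw new TypeError("expected a numeric `quantity` field");
  }
  return parsed.quantity;
}

console.log(parseQuantityUnsafe("not json")); // undefined, no error
console.log(parseQuantitySafe('{"quantity": 3}')); // 3
```

The first version "kinda works" in exactly the toy-car sense: it runs, but when the input is bad the failure only shows up much later, far from its cause.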

vessenes 5 hours ago||||
Don't apologize. Keep writing and trying things. Ignore the haters and non-curious, listen to the (even if salty) interested.

There's a fair amount of talk right now about the value being in the verification layer -- once there's a hard verification loop, the agents can do amazing things without getting (permanently) sidetracked. I think what you're working on is halfway there -- in essence, you're probably relying on the LLM's notion of what a spec is and how it should relate to the codebase.

What's not currently solved, and what I think is very interesting, is how much automation can be added to the creation of verification. We'd all unlock a lot more speed and productivity from even moderate gains on that side.
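A "hard verification loop" in this sense can be sketched very simply: candidate output is only accepted once an independent check passes, and failures are fed back into the next attempt. `generate` and `verify` here are hypothetical stand-ins for an agent call and a compile-and-test run:

```javascript
// Minimal sketch of a hard verification loop: retry generation until
// an independent verifier accepts the candidate, feeding failures back.
function hardVerificationLoop(generate, verify, maxAttempts = 5) {
  let feedback = null;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const candidate = generate(feedback); // e.g. an LLM call
    const result = verify(candidate); // e.g. compile + run tests
    if (result.ok) return { candidate, attempts: attempt };
    feedback = result.errors; // errors become context for the next try
  }
  throw new Error("verification never passed");
}

// Toy usage: "generate" proposes numbers, "verify" accepts even ones.
let n = 0;
const out = hardVerificationLoop(
  () => ++n,
  (c) => (c % 2 === 0 ? { ok: true } : { ok: false, errors: ["odd"] })
);
console.log(out); // { candidate: 2, attempts: 2 }
```

The open question the comment raises is exactly the `verify` slot: how much of that checker can itself be generated rather than hand-written.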

brendanmc6 2 hours ago||
Totally agree. This is 100% where I want to be focusing my energy next! A lot to learn and explore still, just need to find the time.
wiseowise 7 hours ago|||
No need to apologize; just don’t act surprised when people call you out.
adi_kurian 6 hours ago||
A tried and true content marketing strategy. The 100+ upvotes suggest he's doing something right.
hsaliak 4 hours ago||
The problem is not the forward pass, it's the control/feedback loop when slop is written in response to the forward pass. Perhaps we should give the LLM two specs: one designed for the forward pass, and another for the acceptance criteria / backward pass that's focused on tests, best practices and code, so that the output is independently verified?
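The two-spec idea can be sketched as follows: one spec drives generation, while a separate acceptance spec checks the output against inputs and expected results only. All names and the spec shapes here are invented for illustration, not any existing tool's format:

```javascript
// The forward spec is what the generator sees.
const forwardSpec = {
  feature: "slugify",
  description: "lowercase a title and join its words with dashes",
};

// The acceptance spec is kept separate, so the checker never grades
// its own homework: it only knows inputs and expected outputs.
const acceptanceSpec = [
  { input: "Hello World", expect: "hello-world" },
  { input: "  Spec  Driven ", expect: "spec-driven" },
];

// Pretend this came back from an LLM that was shown only forwardSpec.
function slugify(title) {
  return title.trim().toLowerCase().split(/\s+/).join("-");
}

// Independent verification: run the candidate against the acceptance spec.
function accept(fn, spec) {
  return spec.every(({ input, expect }) => fn(input) === expect);
}

console.log(accept(slugify, acceptanceSpec)); // true
```

The point of the separation is that the backward-pass spec can be written (or reviewed) by someone who never saw the generated code.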
DeathArrow 3 hours ago||
When developing large or complex software with AI, I think we need a kind of "Jira for coding agents" - something much lighter and simpler than Jira, where agents can see the specs, see what is completed, what the relationships between different features are, and what needs to change.

That would be easier to use than gazillions of .md files and skills.
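A minimal sketch of such a board, assuming a flat task list with explicit dependencies (the shape is invented for illustration, not an existing tool): the one query an agent really needs is "which tasks are unblocked right now?"

```javascript
// A toy "Jira for coding agents": tasks with status and dependencies.
const board = [
  { id: "schema", status: "done", deps: [] },
  { id: "api", status: "todo", deps: ["schema"] },
  { id: "ui", status: "todo", deps: ["api"] },
];

// Return tasks that are still open and whose dependencies are all done.
function ready(tasks) {
  const done = new Set(
    tasks.filter((t) => t.status === "done").map((t) => t.id)
  );
  return tasks.filter(
    (t) => t.status === "todo" && t.deps.every((d) => done.has(d))
  );
}

console.log(ready(board).map((t) => t.id)); // ["api"]
```

Even a single JSON file with this shape, checked into the repo, would give agents the completed/blocked/next view the comment asks for without Jira's weight.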

wiseowise 7 hours ago||
What is it with people and procrastinating with the most useless shit you can imagine?

First it was the choice of editor: people micro-optimizing every aspect of their typing experience, editor wars where people would practically slaughter each other over a suggestion from the other camp.

Editor wars v2: IDEs arrived and second editor war began.

Revenge of the note taking apps: Obsidian/Roam/Joplin/Apple Notes/Logseq. Just one plugin, just one more knowledge graph, bro, and I’ll have peak productivity. 10x is almost here.

AI: you’re witnessing it now.

Do people NOT have anything else in life? How are y’all finding time to do all of this shit? Are you doing it on company time? Do you have hobbies, do you learn foreign languages, travel, have kids or spouses, drive a car, other thousand “normie” things outside of staring at the freaking monitor or thinking about this shit 24/7? Did I miss the invention of a Time Machine?

hansmayer 7 hours ago||
A lot of people, sadly, nowadays don't have anything resembling a social life or family... So here we are now, specmaxxing and shit :)
adi_kurian 6 hours ago|||
Lmfao. Going to a site for computer geeks and complaining that they are computer geeks.

Also, a lot of folks don't write code anymore, and barely have the time to read the volume of code that AI produces. This may just be one of the most profound changes in an industry, and some folks are excited about it and want to get better at building with it.

I think the person who wrote this post made a good faith effort to share his learnings while promoting his tool.

WesolyKubeczek 6 hours ago|||
It's fun how people brag of their agentmaxxing, but if you ask them what those agents are busy actually producing, it's invariably another agent harness so they can agentmaxx better. NFT/blockchain ecosystem was much the same.
geoffbp 7 hours ago|||
I think people find joy in trying to optimise (maxxxxxx) their setup, be it editor, AI, note taking, etc. They make time for it.
logicchains 7 hours ago||
>Do people NOT have anything else in life? How are y’all finding time to do all of this shit? Are you doing it on company time? Do you have hobbies, do you learn foreign languages, travel, have kids or spouses, drive a car, other thousand “normie” things outside of staring at the freaking monitor or thinking about this shit 24/7? Did I miss the invention of a Time Machine?

How are any of those things even remotely as interesting as arguing with people about an Emacs config?

WesolyKubeczek 6 hours ago||
If you have ever been to car forums, it's quite the same there.

People are people.

pineaux 6 hours ago||
Anything [prefix]maxxing just sounds so bad. It just feels so Andrew Tate...
csomar 5 hours ago||

    Dear Claude,
    I hope this email finds you well.
    I am writing to ask if you could please do another task for me.
    Start by running `npx @acai.sh/cli skill`.
    This will teach you everything you need to know about our process for spec-driven development. Then, proceed to plan and implement the features specified in our spec files.

    Love,
    [your-name]
Honestly, I can no longer tell parody from reality. Whether in politics or AI.
_the_inflator 5 hours ago|
The author is right, but his message isn't really specsmaxxing, because while it's somewhat understandable as a rationale, what does it actually mean?

In other words: specs can be as detailed as it gets, and this is why developers have a hard time when, as seniors, they first face an NDA'd, regulated environment. It isn't software craftsmanship but data flow, hardware components, compliance at the lowest level (often including supply chains), information architecture: a simple app may need to comply with specs that run to thousands of pages.

Context window: circular reference. A year ago? Specsmaxxing meant really weeding out any redundant words. Today? Yawn, like 8 MB of RAM vs 512 gigabytes.

AI wants to be easy on us so what is a spec anyway then?

To put it this way: the spec for the spec is constantly evolving.

Last year’s prompts led to extremely different results today, no matter how maxed out.

The author was on point with his introduction: AI is junior in many ways when it comes to any sort of efficiency and optimization.

This is my evaluation after years of experimenting with AI: beautiful, sophisticated code, but performance-wise and architecturally it is laughable at best.

AI is not trained on optimization. Not in the slightest, and juniors have no clue about algorithms and Big O either.

In fact, Google used Big O as a basic entry-level interview question for a very long time. They have to, but the simple fact that, in my experience, 99% of devs have never heard of it or considered it speaks volumes.

AI cannot compensate for that (yet).

I went the opposite way, and my specs focus heavily on architecture and the obvious dumb performance drains noobs introduce.
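As a hypothetical illustration of the kind of "obvious dumb performance drain" meant here: membership tests inside a loop. Using `Array.includes` per element makes an intersection O(n·m); building a `Set` first makes it O(n + m), with identical output:

```javascript
// Quadratic: each element of `a` triggers a linear scan of `b`.
function intersectSlow(a, b) {
  return a.filter((x) => b.includes(x));
}

// Linear: pay O(m) once to build the Set, then O(1) lookups.
function intersectFast(a, b) {
  const setB = new Set(b);
  return a.filter((x) => setB.has(x));
}

console.log(intersectFast([1, 2, 3, 4], [2, 4, 6])); // [2, 4]
```

Both are "correct", which is exactly why this class of bloat survives testing: only the Big O differs, and it only screams at scale.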

Google was mocked for asking about Big O. And yes, understanding that Big O can thankfully be neglected in 99% of cases is part of that logic.

AI bloats your code, and a year-long single-dev project gets pumped out in hours. In short: a home run for Big O, because it looks at how results change depending on the variables. A function, in mathematical terms.

So I think the author did a funny and great job: focus on Big O where needed. Everything else is not that important, as long as the code stays open to change and extension.

Big numbers need great architecture.

It screams loudly. And also think about leaks. Before AI I had virtually no memory leaks at all. Since AI, my NodeJS and React code leaks worse than IE 6 and 8 did. I mean it.

Big O reduces them significantly, so don’t work around the elephant in the room.

Architecture and optimization are brutally hard. Google blew my mind in this regard, but that is another story, about squeezing even milliseconds out of a build tool used by everyone. A single dev laughs at that, but fails at the calculation as well as the abstraction.
