Posted by signa11 17 hours ago

A sufficiently detailed spec is code (haskellforall.com)
543 points | 299 comments | page 7
scuff3d 13 hours ago|
I recently left this comment on another thread. At the time I was focused on planning mode, but it applies here.

Plan mode is a trap. It makes you feel like you're actually engineering a solution, like you're making measured choices about implementation details. You're not, you're just vibe coding with extra steps. I come from an electrical engineering background originally, and I've worked in aerospace most of my career. Most software devs don't know what planning is. The mechanical, electrical, and aerospace engineering teams plan for literal years: countless reviews and re-reviews, trade studies, down-selects, requirement derivations, MBSE diagrams, and God knows what else before anything that will end up in the final product is built. It's meticulous, detailed, time-consuming work, and bloody expensive.

That's the world software engineering has been trying to leave behind for at least two decades, and now with LLMs people think they can move back to it with a weekend of "planning", answering a handful of questions, and a task list.

Even if LLMs could actually execute on a spec to the degree people claim (they can't), it would take as long to properly define the spec as it would to just write the code with AI assistance in the first place.

Seattle3503 11 hours ago|
I think of "plan" mode as a read-only mode where the LLM isn't chomping at the bit to start writing to files. Rather than being excitable and over-active, it is receptive and listening.
jongjong 8 hours ago||
This is relatable.

I did a side project with a non-technical co-founder a year ago, and every time he told me what he wanted, I made a list of like 9 or 10 logical contradictions in his requirements and had to walk him through what he'd said with drawings of the UI so that he would understand. Some stuff he wanted sounded good in his head, but once you walk through the implementation details, the solution is extremely confusing for the user, or it's downright impossible given cost or computational resource constraints.

Sure, most people who launched a successful product basically stumbled onto the perfect idea by chance on the first attempt... but what about the 99% who fell flat on their faces? You are the 99%, so if you want to succeed by actual merit instead of becoming a statistic, you have to think about all this stuff ahead of time. You have to simulate the product and business in detail in your mind and ask yourself honestly: is this realistic? Before you even draw your first wireframe. If you find anything wrong with it, anything wrong at all, it means the idea sucks.

It's like: this feature is too computationally and/or financially expensive to offer for free, and not useful enough to warrant demanding payment from users... You shouldn't even waste your time on implementation; it's not going to work! The fundamental economics of the software that exists in your imagination aren't going to magically resolve themselves once it's implemented in reality.

Translating an idea to reality never resolves any known problems; it only adds more problems!

The fact is that most non-technical people have only a very vague idea of what they want. They operate in a kind of wishy-washy, hand-wavy, emotion-centric environment, and they think they know what they're doing, but they often don't.

serallak 4 hours ago|
He wanted seven perpendicular lines?
macinjosh 14 hours ago||
IMHO, LLMs are better at Python and SQL than Haskell because Python and SQL syntax mirrors more aspects of human language, whereas Haskell syntax reads more like a math equation. These are Large _Language_ Models, so naturally intelligence learned from non-code sources transfers better to more human-like programming languages. Math equations assume the reader has context for what the symbols mean that isn't included in the written-down part.
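To make the contrast concrete, here's a small hypothetical example (names are my own): the same function written in Python's sentence-like style, with the point-free Haskell equivalent, which reads more like composed math notation, shown in a comment.

```python
# Python version: reads close to an English sentence,
# "sum x times x for each x in xs".
def sum_of_squares(xs):
    return sum(x * x for x in xs)

# The Haskell equivalent composes functions like a math equation:
#   sumOfSquares :: [Int] -> Int
#   sumOfSquares = sum . map (^ 2)

print(sum_of_squares([1, 2, 3]))  # prints 14
```

Neither version is more "detailed" than the other; the Haskell one simply assumes the reader already knows what `.` and `map` denote, which is the kind of implicit context the comment describes.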
dwb 7 hours ago||
I have found recent models to be quite respectable at Haskell, given a couple of initial nudges on style - but that’s true of anything.
WanderPanda 13 hours ago|||
They are heavily post-trained on code and math these days. I don't think we can infer that much about their behavior from just the pre-training dataset anymore.
ozozozd 13 hours ago|||
They are not called Context-Sensitive Large Language Models though.

LLMs are very good at bash, which I’d argue doesn’t read like language or math.

catlifeonmars 14 hours ago||
I suspect you're probably right, but just for completeness, one could also make the argument that LLMs are better at writing Haskell because they are overfit to natural language, and Haskell would avoid a lot of the overfit spaces and thus generalize better. In other words, less baggage.
travisgriggs 13 hours ago||
I would guess they’re better at python and SQL than Haskell because the available training data for python and SQL is orders of magnitude more than Haskell.
nsnzjznzbx 9 hours ago||
Not really. A lot of code is tradeoffs and decisions made by programmers, but the business spec is met either way.