I kind of consider them the same thing. Openpilot can drive really well on highways for hours on end when nothing interesting is happening. Claude Code can do straightforward refactors, write boilerplate, do scaffolding, do automated git bisects with no input from me.
Neither one is a substitute for the 'driver'. Claude Code is like the level 2 self-driving of programming.
It's just like "Are robots GOOD or BAD at building things?"
WHAT THINGS?
Sure, the engineering may be abysmal, but it's good enough to work.
It only takes basic English to produce these results, plus complaining to the AI agent that "The GUI is ugly and overcrowded. Make it look better, and dark mode."
Want specs? "include a specs.md"
This isn't a 20% more productive feeling. It's productivity beyond what I will ever do on my own, given this is not my field.
This is all possible because AI was trained on the outstanding work of CS engineers like y'all.
But the article is highly opinionated. It's like saying only PhDs can be called scientists, or only programmers can be computer hackers. But in reality every human is a scientist and a hacker. The guy on a street corner in India came up with novel ways to make and sell his product, but never wrote a research paper on it. The guy on his fourth marriage noted a statistical correlation in the outcome when meeting women at a bar vs. at a church. The plant that grew in the crevice of a rock noted sunlight absorption was optimal at an angle of 78.3 degrees and grew in that direction.
But creating little pieces of software was already possible; for example, I make most of my DIY software in a spreadsheet.
There are tons upon tons of low-code options and already existing software packages that just need a bit of configuration, and using AI or LLMs is not bringing anything that revolutionizes access to computation or tailored software. Just get your head around Excel or LibreOffice Calc and most of your DIY software needs are covered.
What LLMs and AI bring to the table is the illusion of being able to create software exactly like people who have all the theoretical background or experience in building it (which the person I originally replied to seems to be).
So, as in my original comment, the problem is that people building a forest hut are convinced that if they just put more sticks on top, they will manage to build a skyscraper.
This describes me pretty well too, though I do have a tiny bit of programming experience. I wrote maybe 5000 lines of code unassisted between 1995 and 2024. I didn't enjoy it for the most part, nor did I ever feel I was particularly good at it. On the more complex stuff I made, it might take several weeks of effort to produce a couple hundred lines of working code.
Flash forward to 2025 and I co-wrote (with LLMs) a genuinely useful piece of back office code to automate a logistics problem I was previously solving via a manual process in a spreadsheet. It would hardly be difficult for most people here to write this program; it's just making some API calls, doing basic arithmetic, and displaying the results in a TUI. But I took a crack at it several times on my own, and unfortunately, between the API documentation being crap and my own lack of experience, I never got to the point where I could even make a single API call. LLMs got me over that hump and greatly assisted with writing the rest of the codebase, though I did write some of it by hand and worked through some debugging to solve issues in edge cases. Unlike OP, I do think I understand reasonably well what >90% of the code is doing.
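To give a sense of the shape of that kind of program (nothing here reflects the actual system; the endpoint, field names, and rate math are stand-ins I invented), a minimal Python version might look like:

    # Purely illustrative: fetch records from an HTTP API, do basic arithmetic,
    # and show the result as a table in the terminal (using the 'rich' library).
    import requests
    from rich.console import Console
    from rich.table import Table

    API_URL = "https://example.com/api/shipments"  # placeholder endpoint
    API_KEY = "..."                                # placeholder credential

    def fetch_shipments():
        r = requests.get(API_URL, headers={"Authorization": f"Bearer {API_KEY}"}, timeout=30)
        r.raise_for_status()
        return r.json()  # assume a list of {"id", "weight_kg", "rate_per_kg"} records

    def main():
        table = Table(title="Shipments")
        table.add_column("ID")
        table.add_column("Weight (kg)", justify="right")
        table.add_column("Cost", justify="right")
        for s in fetch_shipments():
            cost = s["weight_kg"] * s["rate_per_kg"]  # the "basic arithmetic" part
            table.add_row(str(s["id"]), f"{s['weight_kg']:.1f}", f"{cost:.2f}")
        Console().print(table)

    if __name__ == "__main__":
        main()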
> This isn't a 20% more productive feeling. It's productivity beyond what I will ever do on my own, given this is not my field.
So yeah, to the people here saying the above sentiment is BS - it's not. For people who have never worked in programming or even in tech, these tools can be immensely useful.
If the app runs locally it doesn't matter; if it's connected to the net it could be the seed for the next Mirai botnet.
Was it a real website? No, but it’s a live mockup way better than any Figma mock or rigid demo-ware.
The hacker on the street corner isn't distributing his "secret sauce" because it wouldn't meet standards, but it works well for him, and it was cheap/free.
I've worked on several projects from a few different engineering disciplines. Let me tell you from that experience alone, this is a statement that most of us dread to hear. We had nothing but pain whenever someone said something similar. We live by the code that nothing good is an accident, but is always the result of deliberate care and effort. Be it quality, reliability, user experience, fault tolerance, etc. How can you be deliberate and ensure any of those if you don't understand even the abstractions that you're building? (My first job was this principle applied to the extreme. The mission demanded it. Just documenting and recording designs, tests, versioning, failures, corrections and even meetings and decisions was a career in itself.)

Am I wrong about this when it comes to AI? I could be. I concede that I can't keep up with the new trends to assess all of them. It would be foolish to say that I'm always right. But my experience with AI tools hasn't been great so far. It's far easier to delegate the work to a sufficiently mentored junior staff. Perhaps I'm doing something wrong. I don't know. But that statement I said earlier - it's a fundamental guiding principle in our professional lives. I find it hard to just drop it like that.
> But the article is highly opinionated. It's like saying only PhDs can be called scientists, or only programmers can be computer hackers.
Almost every single quality professional in my generation - especially the legends - started those pursuits in their childhood out of self-motivation (not even as part of the school curriculum). You learn these things by pushing your boundary a little bit every day. You are a novice one day; you are the master on another. Are you absolutely pathetic at dancing? Try ten minutes a day. See what happens in ten years. Meanwhile, kids don't even care about others' opinions while learning. Nobody is gatekeeping you on account of your qualifications.
What they're challenging are the assumptions that vibe/AI coders seem to hold but that don't agree with their intuition. They are old-fashioned developers. But their intuitions are honed over decades, and they tend to be surprisingly accurate for reputed developers like Geohot. (There are numerous hyped-up engineering projects out there that made me regret ignoring my own intuition!) It's even more valid if they can articulate their intuition into reasons. That is a very formal activity, even if they express it as blog posts.

Geohot clearly articulates why he thinks that AI copilots are nothing more than glorified compilers with a very leaky specification language. It means that you need to be very careful with your prompts, on top of tracking the interfaces, abstractions and interactions that the AI currently doesn't track at all for you. Perhaps it works for you at the scale you're trying. But lessons like the Therac-25 horror story [1] always remind us how badly things can go wrong. I just don't want to put in that extra effort and waste my time reviewing AI-generated code. I want to review code from a person whom I can ask for clarifications and give critiques and feedback that they can follow later.
Now, after a weekend morning, I have something much slimmer, more predictable and sophisticated running... my extension shows a list of repeated responses and I can toggle which one to send to a localhost API that has a simple job queue to update a SQLite db with each new entry, extract the important parts, send it to my LM Studio gpt-oss-20b endpoint for some analysis, and finally send me a summary on Telegram.
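For anyone curious what that glue looks like, here is a rough, illustrative Python sketch of the queue-and-summarize part (the table schema, model id, and Telegram credentials are placeholders I made up; LM Studio is assumed to be serving its OpenAI-compatible API on the default localhost port):

    # Illustrative only: a SQLite-backed job queue, analysis via a local
    # LM Studio endpoint, and a Telegram summary. Names and tokens are placeholders.
    import sqlite3, requests

    DB = "responses.db"
    LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio's OpenAI-compatible API (default port)
    TELEGRAM_TOKEN = "123456:ABC..."  # placeholder bot token
    TELEGRAM_CHAT_ID = "987654321"    # placeholder chat id

    def init_db():
        con = sqlite3.connect(DB)
        con.execute("""CREATE TABLE IF NOT EXISTS jobs (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            payload TEXT NOT NULL,
            status TEXT NOT NULL DEFAULT 'pending')""")
        con.commit()
        return con

    def enqueue(con, payload):
        # The localhost API the extension posts to would call this for each new entry.
        con.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))
        con.commit()

    def analyze(text):
        # Ask the local gpt-oss-20b model to pull out the important parts.
        r = requests.post(LMSTUDIO_URL, json={
            "model": "openai/gpt-oss-20b",
            "messages": [
                {"role": "system", "content": "Summarize the important parts in a few bullets."},
                {"role": "user", "content": text},
            ],
        }, timeout=120)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    def send_telegram(summary):
        requests.post(f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/sendMessage",
                      json={"chat_id": TELEGRAM_CHAT_ID, "text": summary}, timeout=30)

    def work_once(con):
        # Pull the oldest pending job, analyze it, notify, and mark it done.
        row = con.execute("SELECT id, payload FROM jobs WHERE status='pending' ORDER BY id LIMIT 1").fetchone()
        if row:
            job_id, payload = row
            send_telegram(analyze(payload))
            con.execute("UPDATE jobs SET status='done' WHERE id=?", (job_id,))
            con.commit()

    if __name__ == "__main__":
        con = init_db()
        enqueue(con, "example captured response text")
        work_once(con)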
I know what I want in my head, but cutting the experimenting or PoC step down to minutes vs. hours is pretty useful, and as a competent enough dev it's elevated what I can get done, so I can take on more work than I would have by myself previously.
Researching concepts, for one, has become so much easier, especially for things where you don't know anything yet and would have a hard time even formulating a search engine query.
LLMs are really valuable for finding information that you aren't able to formulate a proper search query for.
To get the most out of them, ask them to point you to reliable sources instead of explaining directly. Even then, it pays to be very critical of where they're leading you. To make an LLM the primary window through which you seek new information is extremely precarious epistemologically. Personally, I'd use it as a last resort.
Personal experience (data point count = 1), as a somewhat seasoned dev (>30 yrs of coding): it makes me WAY faster. I confess to not reading the code produced at each iteration other than skimming through it for obvious architectural code smells, but I do read the final version line by line and make a few changes until I'm happy.
Long story short: things that would take me a week to put together now take a couple of hours. The vast bulk of the time saved is not having to identify the libraries I need, and not having to rummage through API documentation.
> Long story short: things that would take me a week to put together now take a couple of hours. The vast bulk of the time saved is not having to identify the libraries I need, and not having to rummage through API documentation.
One of these is not true.
With libraries, either you HAVE to use one, so you spend time getting acquainted with it (usually a couple of hours to make sense of its design; the rest comes on an as-needed basis), or you are evaluating multiple ones (and that task is much quicker).
Of course the latter. And of course I ask the AI to help me select a library/module/project/whatever that provides what I need. And I ask the AI to classify them by popularity/robustness. And then I apply whatever little/much I know about the space to refine the choice.
I may go as far as looking at examples that use the API. And maybe rummage through the code behind the API to see if I like what I see.
The whole thing is altogether still way faster than having to pick what I need by hand with my rather limited data ingestion capabilities.
And then, once I've picked one, connecting to the APIs is a no-brainer with an LLM; it goes super fast.
Altogether major time saved.
By the author's implied definition of compiler, a human is also a compiler. (Coffee in, code out, so the saying goes.)
But code is distinct from design, and unlike compilers, humans are synthesizers of design. LLMs let you spend more time as system architect instead of code monkey.
Not everything should make sense. Playing, trying and failing is crucial to making our world nicer. Not overthinking is key; see later what works and why.
Waymo's driving people around with an injuries-per-mile rate that's lower than having humans do it. I don't see how that reconciles with "obviously made no sense".
I think he's having a bad day. He's smarter than this.
It would be, if there weren't actual important work that needs funding.