
Posted by abhaynayar 9/13/2025

AI coding (geohot.github.io)
410 points | 284 comments | page 2
andrewchambers 9/13/2025|
Kind of surprised by this take - I use openpilot often and also use claude code.

I kind of consider them the same thing. Openpilot can drive really well on highways for hours on end when nothing interesting is happening. Claude Code can do straightforward refactors, write boilerplate, do scaffolding, and do automated git bisects with no input from me.

Neither one is a substitute for the 'driver'. Claude Code is like the Level 2 self-driving of programming.

jrm4 9/13/2025||
Again, can't help but wonder when we might improve this conversation by more strongly defining "what kind of programming."

It's just like "Are robots GOOD or BAD at building things?"

WHAT THINGS?

hereme888 9/13/2025||
I'm a 100% vibe-coder. AI/CS is not my field. I've made plenty of neat apps that are useful to me. Don't ask me how they work; they just do.

Sure, the engineering may be abysmal, but it's good enough to work.

It only takes basic English to produce these results, plus complaining to the AI agent that "The GUI is ugly and overcrowded. Make it look better, and dark mode."

Want specs? "include a specs.md"

This isn't a 20% more productive feeling. It's productivity beyond what I will ever do on my own, given this is not my field.

This is all possible because AI was trained on the outstanding work of CS engineers like y'all.

But the article is highly opinionated. It's like saying only PhDs can be called scientists, or only programmers can be computer hackers. But in reality every human is a scientist and a hacker in the real world. The guy on a street corner in India came up with novel ways to make and sell his product, but never wrote a research paper on it. The guy on his fourth marriage noted a statistical correlation in the outcome when meeting women at a bar vs. at a church. The plant that grew in the crevice of a rock noted sunlight absorption was optimal at an angle of 78.3 degrees and grew in that direction.

ozim 9/13/2025||
You made a forest hut and you are calling the people who build skyscrapers gatekeepers.
hereme888 9/13/2025||
No. I'm just saying: "yes, AI can code."
sitzkrieg 9/13/2025||
no one is disagreeing. you just haven't discovered the true costs yet
brulard 9/13/2025||
What is the cost of enabling someone to create little pieces of software if he otherwise wouldn't? I'm no advocate of vibe coding serious commercial software, but for little DIY stuff - absolutely.
ozim 9/13/2025|||
No one is forbidding people creating little pieces of software.

But creating little pieces of software was already available, for example I make most of my DIY software in a spreadsheet.

There are tons upon tons of low-code options and already existing software packages that need only a bit of configuration, and using AI or LLMs is not bringing anything that revolutionizes access to computation or to tailoring software. Just get your head around Excel or LibreOffice Calc and most of your DIY software needs are covered.

What LLMs and AI bring to the table is the illusion of being able to create software exactly like people who have all the theoretical background or experience in building it (which seems to be the case for the person I originally replied to).

So, as in my original comment, the problem is that people building a forest hut are convinced that if they just put more sticks on top, they will manage to build a skyscraper.

hereme888 9/14/2025||
I'm so confused. So now you're an elitist gatekeeper to the world of programming? I entered w/o knowledge, and my apps are pretty useful to myself and some others. They're not "little pieces of software". They are fully working, self-contained apps with lots of unique code. Some are better than many apps that existed 10 years ago, but you seem to resent that. Also, whatever you do is relatively "high-level" programming on top of the work of those who created hardware and engineered programming languages, compilers and SDKs.
sitzkrieg 9/14/2025|||
it's awesome and fun, but people throwing shit online that they don't 100% understand is a tale as old as people paying attention. which apparently isn't that long. fuck off
ac29 9/13/2025|||
> I'm a 100% vibe-coder. AI/CS is not my field. I've made plenty of neat apps that are useful to me.

This describes me pretty well too, though I do have a tiny bit of programming experience. I wrote maybe 5000 lines of code unassisted between 1995-2024. I didn't enjoy it for the most part, nor did I ever feel I was particularly good at it. On the more complex stuff I made, it might take several weeks of effort to produce a couple hundred lines of working code.

Flash forward to 2025 and I co-wrote (with LLMs) a genuinely useful piece of back office code to automate a logistics problem I was previously solving via a manual process in a spreadsheet. It would hardly be difficult for most people here to write this program, it's just making some API calls, doing basic arithmetic, and displaying the results in a TUI. But I took a crack at it several times on my own and unfortunately, between the API documentation being crap and my own lack of experience, I never got to the point where I could even make a single API call. LLMs got me over that hump and greatly assisted with writing the rest of the codebase, though I did write some of it by hand and worked through some debugging to solve issues in edge cases. Unlike OP, I do think I understand reasonably well what >90% of the code is doing.

> This isn't a 20% more productive feeling. It's productivity beyond what I will ever do on my own, given this is not my field.

So yeah, to the people here saying the above sentiment is BS - it's not. For people who have never worked in programming or even in tech, these tools can be immensely useful.

neurostimulant 9/13/2025|||
I think it's like CMSes and page builders enabling people to build their own websites without HTML and server knowledge. They're not making web developers disappear; instead there are more web developers now, because some of those people would eventually outgrow their page builders and need to hire web developers.
platevoltage 9/13/2025||
This is the optimistic view that I see around. I go back and forth. I've had some work come my way that was the result of "vibe coders" hitting a wall with the tools' capabilities.
croes 9/13/2025|||
The crucial part is security.

If the app runs locally it doesn't matter; if it's connected to the net, it could be the seed for the next Mirai botnet.

rhizome31 9/13/2025|||
Apps running locally can also be subject to security issues. What you're trying to say is probably "apps not using untrusted input". If an app takes no input at all, I guess we could say that security isn't an issue, but there could still be safety issues.
chatmasta 9/13/2025||||
It’s a pretty good solution for creating live mockups. A designer on my team came back eight hours after a meeting with a fully vibe coded, multi-page interface. I was honestly blown away. I had no idea this was the state of what’s possible with these tools.

Was it a real website? No, but it’s a live mockup way better than any Figma mock or rigid demo-ware.

hereme888 9/13/2025||||
Oh I'd never argue that. Cloud stuff is truly beyond the complexity I'd get involved with at the moment.
VladVladikoff 9/13/2025|||
Also design. All the vibe coded apps always look like absolute ass.
hereme888 9/14/2025||
I could say the same for lots of non-vibe-coded apps. Have you considered that modern vibe-coding can easily produce apps that look better than anything from 20 years ago thanks to all the modern frameworks? Heck, one-shotting a GUI or website will yield better-looking results than anything published by Debian or FreeBSD, lol.
suddenlybananas 9/13/2025|||
What have you actually made?
hereme888 9/13/2025||
What's the intent behind your question?
Sammi 9/13/2025|||
The Internet is awash with people making the same claims you are, but where are the actual results that we can see and use? Where are all these supposed new programs that were only possible to make because of generative AI? The number of new apps in the app store is flat. We're still getting the same number in 2025 as in 2022.
hereme888 9/13/2025||
App-store listing is a whole other animal. I don't care to go through all that just to share my app. I also don't care to resolve every technical issue others experience. Every time I've thought about generating revenue by selling my apps, two thoughts come to mind: my code is not professional-grade, and the field is so competitive that within months a professional will likely create a better app, so why pollute the web with something subpar.

The hacker on the street corner isn't distributing his "secret sauce" because it wouldn't meet standards, but it works well for him, and it was cheap/free.

Sammi 9/15/2025||
Sounds like we're getting at something in this conversation. For the professional developer, generative AI is another tool, making us maybe a bit more capable than before. For the amateur, it is a breakthrough, enabling them to tackle things that are wildly more ambitious than what they could do before. But it's still amateur, just-good-enough-for-personal-use software. Productising software is still 90% of the work of professional software development, and this is why we're not seeing a jump in the amount of for-sale software. What numbers should we track for the amount of amateur software? One number I have seen is that npm downloads of React have jumped a lot in the last year. What this tells us about what additional stuff people are doing with React, I don't know.
hereme888 9/15/2025||
I like that take, and it confirms I shouldn't yet spend my time trying to sell what I make.
athrowaway3z 9/13/2025|||
Evaluating your empirical experience by judging the complexity you're impressed by.
hereme888 9/13/2025||
Valid inquiry. In relative terms I'm the guy on a street corner in India who hacks things together using tools professionally designed by others. Among the repos I've chosen to publicly share: https://github.com/sm18lr88
goku12 9/13/2025||
> Don't ask me how they work; they just do. Sure the engineering may be abysmal, but it's good enough to work.

I've worked on several projects from a few different engineering disciplines. Let me tell you from that experience alone, this is a statement that most of us dread to hear. We had nothing but pain whenever someone said something similar. We live by the code that nothing good is an accident, but is always the result of deliberate care and effort. Be it quality, reliability, user experience, fault tolerance, etc. How can you be deliberate and ensure any of those if you don't understand even the abstractions that you're building? (My first job was this principle applied to the extreme. The mission demanded it. Just documenting and recording designs, tests, versioning, failures, corrections and even meetings and decisions was a career in itself.) Am I wrong about this when it comes to AI? I could be. I concede that I can't keep up with the new trends to assess all of them. It would be foolish to say that I'm always right. But my experience with AI tools hasn't been great so far. It's far easier to delegate the work to a sufficiently mentored junior staff member. Perhaps I'm doing something wrong. I don't know. But that statement I said earlier - it's a fundamental guiding principle in our professional lives. I find it hard to just drop it like that.

> But the article is highly opinionated. It's like saying only PhDs can be called scientists, or only programmers can be computer hackers.

Almost every single quality professional in my generation - especially the legends - started those pursuits in their childhood under self-motivation (not even as part of a school curriculum). You learn these things by pushing your boundary a little bit every day. You are a novice one day. You are the master on another. Are you absolutely pathetic at dancing? Try ten minutes a day. See what happens in ten years. Meanwhile, kids don't even care about others' opinions while learning. Nobody is gatekeeping you on account of your qualifications.

What they're challenging are the assumptions that vibe/AI coders seem to hold but that don't agree with their intuition. They are old-fashioned developers. But their intuitions are honed over decades, and for reputed developers like Geohot they tend to be surprisingly accurate. (There are numerous hyped-up engineering projects out there that made me regret ignoring my own intuition!) It's even more valid if they can articulate their intuition into reasons. This is a very formal activity, even if they express it as blog posts. Geohot clearly articulates why he thinks that AI copilots are nothing more than glorified compilers with a very leaky specification language. It means that you need to be very careful with your prompts, on top of tracking the interfaces, abstractions and interactions yourself, which the AI currently doesn't do for you at all. Perhaps it works for you at the scale you're trying. But lessons like the Therac-25 horror story [1] always remind us how badly things can go wrong. I just don't want to put in that extra effort and waste my time reviewing AI-generated code. I want to review code from a person whom I can ask for clarifications and give critiques and feedback that they can follow later.

[1] https://thedailywtf.com/articles/the-therac-25-incident

dmh2000 9/13/2025||
I'm 72, a dev for 40 years. I've lost a step or two. It's harder to buckle down and focus, but using AI tools has enabled me to keep building stuff. I can spec a project, have an agent build it, then make sure it works. I just code for fun anyway.
iKlsR 9/13/2025|
I love using AI to prototype that something is possible, then go and build it myself while borrowing bits from that initial MVP. The other night I wanted to build a browser extension that could intercept requests from a tab; Claude got me something working in about 10 minutes with a couple of prompts and a local storage session, and then I toyed with the UI a bit to see what was possible.

Now, after a weekend morning, I have something much slimmer, more predictable and sophisticated running... my extension shows a list of repeated responses and I can toggle which one to send to a localhost API that has a simple job queue to update a SQLite db with each new entry, extract the important parts, send it to my LM Studio gpt-oss-20b endpoint for some analysis, and finally send me a summary on Telegram.
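A minimal sketch of that localhost queue piece (not the actual code: Flask, the /capture route, the model id, and the Telegram credentials are illustrative stand-ins; LM Studio's OpenAI-compatible server is assumed on its default port 1234):

    # capture_server.py - toy version of the queue described above
    import queue, sqlite3, threading
    import requests
    from flask import Flask, request

    TOKEN, CHAT_ID = "telegram-bot-token", "chat-id"   # placeholders
    app = Flask(__name__)
    jobs = queue.Queue()

    def worker():
        # the DB connection lives in the worker thread only
        db = sqlite3.connect("captures.db")
        db.execute("CREATE TABLE IF NOT EXISTS entries"
                   " (id INTEGER PRIMARY KEY, body TEXT, summary TEXT)")
        while True:
            body = jobs.get()  # blocks until the extension posts something
            cur = db.execute("INSERT INTO entries (body) VALUES (?)", (body,))
            db.commit()
            # ask the local LM Studio endpoint (OpenAI-compatible API) for an analysis
            resp = requests.post("http://localhost:1234/v1/chat/completions", json={
                "model": "gpt-oss-20b",  # whatever model id LM Studio exposes
                "messages": [{"role": "user",
                              "content": "Extract and summarize the important parts:\n" + body}],
            })
            summary = resp.json()["choices"][0]["message"]["content"]
            db.execute("UPDATE entries SET summary = ? WHERE id = ?",
                       (summary, cur.lastrowid))
            db.commit()
            # push the summary to Telegram
            requests.post(f"https://api.telegram.org/bot{TOKEN}/sendMessage",
                          json={"chat_id": CHAT_ID, "text": summary})

    @app.post("/capture")
    def capture():
        jobs.put(request.get_data(as_text=True))  # enqueue; the worker does the slow parts
        return "", 202

    threading.Thread(target=worker, daemon=True).start()
    app.run(port=8000)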

I know what I want in my head, but cutting the experimenting or PoC step down to minutes vs. hours is pretty useful, and as a competent enough dev it's elevated what I can get done, so I can take on more work than I would have by myself previously.

viraptor 9/13/2025||
There's a lot of complaining about current compilers / languages / codebases in similar posts, but barely any ideas for how to make them better. It doesn't seem surprising that people go for the easier problem (make the current process simpler with LLMs) rather than the harder one (change the whole programming landscape to something new and actually make it better).
Eikon 9/13/2025||
Even though I don't buy that LLMs are going to replace developers, and I quite agree with what is said, this is more of a critique of LLMs as English-to-code translators. LLMs are very useful for many other things.

Researching concepts, for one, has become so much easier, especially for things where you don't know anything yet and would have a hard time even formulating a search engine query.

ChrisMarshallNY 9/13/2025||
I’ve found that ChatGPT and Perplexity are great tools for finding “that article I skimmed a year ago that talked about…”.
fleebee 9/13/2025||
I agree. I think a better analogy than a compiler is a search engine that has an excellent grasp of semantics but is also drunk and schizophrenic.

LLMs are really valuable for finding information that you aren't able to formulate a proper search query for.

To get the most out of them, ask them to point you to reliable sources instead of explaining directly. Even then, it pays to be very critical of where they're leading you. To make an LLM the primary window through which you seek new information is extremely precarious epistemologically. Personally, I'd use it as a last resort.

mindwok 9/13/2025||
These articles are beyond the point of exhausting. Guys, just go use the tools and see if you like them and feel more capable with them. If you do, great; if you don't, then stop.
energy123 9/13/2025|
The truth will emerge naturally through labor market competition in the long-run. You do you. I will be using these tools extensively. Good luck out there in the arena.
ur-whale 9/13/2025||
I do agree with many points in the article, but not about the last part, namely that coding with AI assist makes you slower.

Personal experience (data points count = 1), as a somewhat seasoned dev (>30yrs of coding), it makes me WAY faster. I confess to not reading the code produced at each iteration other than skimming through it for obvious architectural code smell, but I do read the final version line by line and make a few changes until I'm happy.

Long story short: things that would take me a week to put together now take a couple of hours. The vast bulk of the time saved is not having to identify the libraries I need, and not having to rummage through API documentation.

skydhash 9/13/2025|
> Personal experience (data points count = 1), as a somewhat seasoned dev (>30yrs of coding), it makes me WAY faster.

> Long story short: things that would take me a week to put together now take a couple of hours. The vast bulk of the time saved is not having to identify the libraries I need, and not having to rummage through API documentation.

One of these is not true.

With libraries, either you HAVE to use one, so you spend time getting acquainted with it (usually a couple of hours to make sense of its design; the rest will come on an as-needed basis), or you are evaluating multiple ones (and that task is much quicker).

ur-whale 9/13/2025||
> you are evaluating multiple ones (and that task is much quicker).

Of course the latter. And of course I ask the AI to help me select a library/module/project/whatever that provides what I need. And I ask the AI to classify them by popularity/robustness. And then I apply whatever little/much I know about the space to refine the choice.

I may go as far as looking at examples that use the API. And maybe rummage through the code behind the API to see if I like what I see.

The whole thing is altogether still way faster than having to pick what I need by hand with my rather limited data ingestion capabilities.

And then, once I've picked one, connecting to the APIs is a no-brainer with an LLM; it goes super fast.

Altogether major time saved.

sltr 9/13/2025||
What an aggressive tone.

By the author's implied definition of compiler, a human is also a compiler. (Coffee in, code out, so the saying goes.)

But code is distinct from design, and unlike compilers, humans are synthesizers of design. LLMs let you spend more time as system architect instead of code monkey.

jasonjmcghee 9/13/2025|
I wouldn't over-index on his tone here - that's the only tone I've seen him use.
dwaltrip 9/14/2025||
Doesn’t make it any better.
runningmike 9/13/2025|
Great short read. But this: “It’s why the world wasted $10B+ on self driving car companies that obviously made no sense.”

Not everything should make sense. Playing, trying and failing is crucial to making our world nicer. Not overthinking is key; see later what works and why.

stevex 9/13/2025||
"that obviously made no sense" is bizarre.

Waymo's driving people around with an injuries-per-mile rate that's lower than having humans do it. I don't see how that reconciles with "obviously made no sense".

CamperBob2 9/13/2025||
"He hates 'em because he ain't 'em" is essentially the whole theme of this rather-disappointing post.

I think he's having a bad day. He's smarter than this.

skydhash 9/13/2025||
> Playing, trying and failing is crucial to making our world nicer. Not overthinking is key; see later what works and why.

It would be, if there weren't actual important work that needs funding.
