True in the long run. Like a car with high acceleration but a low top speed.
AI makes you start fast, but you regret it later because you don't have the top speed.
Chill. Interesting times. Learn stuff, like always. Iterate. Be mindful and intentional; don't just chase mirages, be practical.
The rest is fluff. You know yourself.
* Getting good results from AI forced me to think things through and think clearly - up front, and even harder than before.
* AI almost forces me to structure and break down my thoughts into smaller, more manageable chunks - which is a good thing. (You can't just throw a giant project at it - it ends up really far from what you want if you do that.)
* I have to make a habit of reading whatever code it has added - so that I understand it and can point out improvements to it or, rarely, fixes (Claude).
* Everyone has what they consider the uninteresting parts of a project, the ones they have to put effort into so the bigger project succeeds - AI really helps with those mundane, cog-in-the-wheel things. It not only speeds things up; personally, it gives me more momentum/energy to work on the parts I think are important.
* It's really bad at reusability - most humans will automatically think, "oh, I have a function I wrote for this in another project that I can reuse here." At some point they will turn that into a library. With AI, that amount of context is a problem. I found that filling this gap for the AI is just as much work, and I'm best off doing it myself up front before feeding things to the AI - then I have a hope of getting it to understand the dependency structure and what does what. (See the sketch after this list.)
* Domain-specific knowledge - I deal with Google Cloud a lot and use Gemini to understand what features exist in some GCP product and how I can use them to solve a problem - it works amazingly well at saving me time. At the very least, scoping out the solution options is a big part of the work it makes easier.
* Your Git habits have to be top-notch so you can untangle any mess the AI creates - you reach a point where you have iterated over a feature addition using AI, it's a mess, and you know it went off the rails at some point. If you only made one or two commits, you now have to unwind everything and hope the good parts survive, or try to get the AI to clean it up, which can be risky.
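To make the reusability point concrete, here is a minimal sketch, with an invented helper name, of the kind of small shared module a human would extract into a library but an AI with limited context tends to re-implement per project:

```python
# shared_utils/retry.py - hypothetical helper worth extracting into a small
# internal library instead of letting each AI-assisted project grow its own
# slightly different copy of the same logic.
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def retry(fn: Callable[[], T], attempts: int = 3, delay: float = 0.5) -> T:
    """Call fn, retrying up to `attempts` times with a fixed delay between tries."""
    assert attempts >= 1
    last_error: Exception | None = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as err:  # broad on purpose; this is only illustrative
            last_error = err
            time.sleep(delay)
    raise last_error  # re-raise the final failure
```

Pointing the AI at a module like this up front, rather than hoping it infers the dependency structure on its own, is the "do it myself first" step described above.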
I am not arguing that we should drop AI, but we should really measure its effects and act accordingly. There is more to it than just gaining productivity.
However, the challenge has shifted to code review. I now spend the vast majority of my time reading code rather than writing it. You really need to build strong code-reading muscles. My process has become: read, scrap it, rewrite it, read again… and repeat until it’s done. This approach produces good results for me.
The issue is that not everyone has the same discipline to produce well-crafted code when using AI assistance. Many developers are satisfied once the code simply works. Since I review everything manually, I often discover issues that weren’t even mentioned. During reviews, I try to visualize the entire codebase and internalize everything to maintain a comprehensive understanding of the system’s scope.
And you get to pay some big corporation for the privilege.
In the general case, the only way to convince oneself that the code truly works is to reason through it, as testing only tests particular data points for particular properties. Hence, “simply works” is more like “appears to work for the cases I tried out”.
I could have used an LLM to assist, but then I wouldn’t have learned much.
But I did use an LLM to make a management wrapper that presents a menu of options (a CLI right now) and calls the scripts. That probably saved me an hour, easily.
That’s my comfort level for anything even remotely “complicated”.
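A minimal sketch of the kind of CLI menu wrapper described above; the real one was LLM-generated and its details aren't in the comment, so the script names here are placeholders:

```python
#!/usr/bin/env python3
# Hypothetical "management wrapper": show a menu of maintenance scripts
# and run whichever one the user picks.
import subprocess
import sys

# Placeholder script names, not taken from the original comment.
SCRIPTS = {
    "1": ("Back up the database", ["./backup_db.sh"]),
    "2": ("Rotate logs", ["./rotate_logs.sh"]),
    "3": ("Run health checks", ["./health_check.sh"]),
}

def main() -> None:
    for key, (label, _) in SCRIPTS.items():
        print(f"{key}) {label}")
    choice = input("Select an option (q to quit): ").strip().lower()
    if choice == "q":
        sys.exit(0)
    if choice not in SCRIPTS:
        print("Unknown option.")
        sys.exit(1)
    label, command = SCRIPTS[choice]
    print(f"Running: {label}")
    subprocess.run(command, check=True)

if __name__ == "__main__":
    main()
```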
Did geohot not found one of these?
In any case, I don't fully understand what he's trying to say other than negating the hype (which I generally agree with), while not offering any alternative thoughts of his own beyond: we have bad tools and programming languages. (Why? How are they bad? What needs to change for them to be good?)
He's confidently wrong a lot. (Even if I happen to agree with his new, more sober take on AI coding here.)
AI coding shines when this is a good thing. For instance, say you have to adapt the results of one under-documented API into another. Coding agents like Claude Code can write a prototype, get the real-world results of that API, investigate the contents, write code that tries to adapt, test the results, rewrite that code, test again, rewrite again, test again, ad nauseam.
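For illustration, this is roughly the end state such a loop converges on; every field name below is invented, since the example only says the APIs are under-documented:

```python
# Hypothetical adapter between two under-documented APIs.
# All field names are made up for illustration.
from typing import Any

def adapt_order(source: dict[str, Any]) -> dict[str, Any]:
    """Reshape a record from the source API into what the target API expects."""
    return {
        "order_id": str(source["id"]),
        # Source reports an integer number of cents; target wants a decimal string.
        "total": f"{source['amount_cents'] / 100:.2f}",
        # Source nests the customer object; target wants a flat email field.
        "customer_email": source.get("customer", {}).get("email", ""),
    }

if __name__ == "__main__":
    sample = {"id": 42, "amount_cents": 1999, "customer": {"email": "a@b.c"}}
    print(adapt_order(sample))  # {'order_id': '42', 'total': '19.99', ...}
```

The value described here is not this final code, which is trivial, but the tedious loop of probing real responses until a shape like this is actually right.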
There are absolutely problem domains where this kind of iterative adaptation is slower than bespoke coding, where you already have the abstractions such that every line you write is a business-level decision that builds on years of your experience.
Arguably, Geohot's low-level work on GPU-adjacent acceleration is a "wild west" where his intuition outstrips the value of experimentation. His advice is likely sound for him. If he's looking for a compiler for the highly detailed specifications that pop into his head, AI may not help him.
But for many, many use cases, the analogy is not a compiler; it is a talented junior developer who excels at perseverance, curiosity, commenting, and TDD. They will get stuck at times. They will create things that do not match the specification, and need to be code-reviewed like a hawk. But by and large, if the time-determining factor is not code review but tedious experimentation, they can provide tremendous leverage.
- Autocomplete in Cursor. People think of AI agents first when they talk about AI coding, but LLM-powered autocomplete is a huge productivity boost. It merges seamlessly with your existing workflow, prompting is just writing comments, it can edit multiple lines at once or redirect you to the appropriate part of the codebase, and if the output isn’t what you need you don’t waste much time, because you can just ignore it and write the code as you usually do.
- Generating code examples from documentation. Hallucination is basically a non-problem with Gemini Pro 2.5, especially if you give it the right context. This gets me up to speed on a new library or framework very quickly. Basically a Stack Overflow replacement.
- Debugging. Not always guaranteed to work, but when I’ve been stuck on a problem for too long, it can provide a solution or give me a fresh perspective.
- Self-contained scripts. It’s ideal for these: package installers, CMake configurations, data processing, serverless microservices, etc.
- Understanding and brainstorming new solutions.
- Vibe coding parts of the codebase that don’t need deep integration, e.g. a web component with X and Y features, a C++ function with a well-defined purpose, or a simple file browser. I do wonder whether a functional programming paradigm would be better when working with LLMs, since by avoiding side effects you can work around their weaknesses with large codebases (see the sketch after this list).
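A minimal sketch of the side-effect-free style that last bullet has in mind, in Python rather than the web or C++ examples mentioned; the function and its fields are invented for illustration:

```python
# Hypothetical example of the "no side effects" style: the function only maps
# input to output, so an LLM (or a reviewer) can reason about it without
# knowing anything about the rest of the codebase.
def summarize_scores(scores: list[float]) -> dict[str, float]:
    """Return simple statistics for a list of scores without mutating it."""
    if not scores:
        return {"count": 0.0, "mean": 0.0, "max": 0.0}
    return {
        "count": float(len(scores)),
        "mean": sum(scores) / len(scores),
        "max": max(scores),
    }

if __name__ == "__main__":
    print(summarize_scores([0.2, 0.9, 0.5]))  # {'count': 3.0, 'mean': 0.533..., 'max': 0.9}
```

Because nothing outside the function changes, the worst case when the LLM gets the wider codebase wrong is a bad return value that a unit test can catch.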
I use LLMs to do things like brainstorm, explain programming concepts, and debug. I will not use them to write code. The output is not good enough, and I feel dumber.
I only see the worst of my programming colleagues coding with AI. And the results are actual trash. They have no real understanding of the code "they" are writing, and they have no idea how to debug what "they" made if the LLM is not helpful. I can smell the technical debt.
I used to be a bit more open-minded on this topic, but I'm increasingly viewing any programmers who use AI for anything other than brainstorming and looking stuff up/explaining things as simply bad at what they do.
You know, aside from AI making it super easy and fast to generate this tech debt in whatever amounts they desire?
AI removes that need. You don't need to know what the function does at all, so your brain devotes no energy towards remembering or understanding it.
The LLM prompt has even fewer bits of information specifying the system than the code does. The model has a lot more bits, but still a finite number. A perfect LLM cannot build a perfect app in one shot.
However, AIs can research, inquire, and iterate to gain more bits than they started with.
So the comparison to a compiler is not apt because the compiler can’t fix bugs or ask the user for more information about what the program should be.
Most devs are using AI at the autocomplete level, which fits the compiler analogy and makes sense in 2025, but that isn’t where we will be in 2030.
What we don’t know is how good the technology will be in the future and how cheap and how fast. But it’s already very different than a compiler.
Specifically, natural language is:
- ambiguous (LLMs solve this to a certain extent)
- extremely verbose
- doesn't lend itself well to refactoring
- the same thing can be expressed in way too many different ways, which leads to instability in specs -> code -> specs -> code -> specs loops (and these are essential to do incremental work)
Having something at our disposal that you can write code specs in, that is as easy as natural language yet more concise, easy to learn, and most of all not as anal/rigid as typical programming languages are, would be fantastic. Maybe LLMs can be sued to design such a thing?
Nice misspelling (or a joke?), related to all the lawsuits around LLMs.
Joking aside, it’s already there in a sense. Several times I started with a brief outline of what the prototype should do (an HTML/CSS/JS app), and sure enough, refinements and corrections followed. When the final version worked more or less as expected, I asked the LLM to create a specification (a reproducing prompt) of everything we had made together. Even if the vibe-coded prototype is dropped, the time wasn’t wasted; I probably would never have arrived at the same bullet-list specification without an actual working app at my disposal to test and evaluate. So, paradoxically, this specification might even be used by a human later.