
Posted by abhaynayar 5 hours ago

AI Coding (geohot.github.io)
228 points | 163 comments
bdcravens 3 hours ago|
I'm almost 50, and have been writing code professionally since the late 90s. I can pretty much see projects in my head, and know exactly what to build. I also get paid pretty well for what I do. You'd think I'd be the prototype for anti-AI.

I'm not.

I can build anything, but often struggle with getting bogged down with all the basic work. I love AI for speed running through all the boring stuff and getting to the good parts.

I liken AI development to a developer somewhere between junior and mid-level: someone I can give a paragraph or two of thought-out instructions and have them bang out an hour of work. (The potential for then stunting the growth of actual juniors into tomorrow's senior developers is a serious concern, but a separate problem to solve.)

onion2k 3 hours ago||
I love AI for speed running through all the boring stuff and getting to the good parts.

In some cases, especially with the more senior devs in my org, fear of the good parts is why they're against AI. Devs often want the inherent safety of the boring, easy stuff for a while. AI changes the job to be a constant struggle with hard problems. That isn't necessarily a good thing. If you're actually senior by virtue of time rather than skill, you can only take on a limited number of challenging things one after another before you get exhausted.

Companies need to realise that using AI to go faster is great, but there's still a cognitive impact on the people. A little respite from the hardcore stuff is genuinely useful sometimes. Taking all of that away will be bad for people.

That said, some devs hate the boring easy bits and will thrive. As with everything, individuals need to be managed as individuals.

FeepingCreature 3 hours ago|||
That makes me think of https://store.steampowered.com/app/2262930/Bombe/ which is a version of Minesweeper where instead of clicking on squares you define (parametric!) rules that propagate information around the board automatically. Your own rules skip all the easy parts for you. As a result, every challenge you get is by definition a problem that you've never considered before. It's fun, but also exhausting.
sothatsit 2 hours ago|||
I remember listening to a talk about Candy Crush and how they designed the game to have a few easy levels in between the hard ones, to balance feeling like you're improving while also challenging players. If all the levels get progressively harder, then a lot of people lose motivation to keep playing.
Yoric 2 hours ago|||
Oooohhh....

That looks like plenty of hours of fun! Thanks for the link :)

CuriouslyC 14 minutes ago||||
That's crazy to me. I solve problems. I'm not a janitor or tradesman, you bring me in to envision and orchestrate solutions that bring bottom line value. I live to crack hard nuts, if I never have to bother with rigging again I'll be so happy.
Yoric 3 hours ago||||
Interesting point.

There's also the fact that, while you're coding the easy stuff, your mind is thinking about the hard stuff, looking things up, seeing how the pieces fit together. If you're spending 100% of your time on hard stuff, you might be short-changing these preliminaries.

brabel 1 hour ago||
This makes no sense. Yes, having time to think about the hard part is good, but just because you’re not doing the boilerplate anymore doesn’t mean you can’t do the thinking part anymore! See how absurd it sounds when you actually describe it this way?
Yoric 1 hour ago||
Let me rephrase.

I know brilliant people who took up knitting to keep their hands busy while they think over their difficult problems. But that only works if you can knit in your work hours. Sadly, despite clearly improving the productivity of these people, this is a fireable offense in many jobs.

I'm not saying that the only way to think through a hard problem is to work on boilerplate. If you are in a workplace where you can knit, or play table soccer, and these help you, by all means go for it.

What I'm thinking out loud is that if we're dedicating 100% of our time to the hard problems, we'll hit a snag, and that boilerplate may (accidentally) serve as the padding that keeps us below that 100%.

That being said, I'm not going to claim this as a certainty, just an idea.

bdcravens 13 minutes ago||
I don't disagree, but I find a better use of my time is writing. Not code, but essentially a work journal. It's not big thoughts, it's bullet points. It's not documentation, but more of an open mind map: what's been done, what needs to be done, questions that inevitably pop up, etc. I use Obsidian for this, but if I write much more than what would go on a few post-it notes, it's too much.
mystifyingpoi 1 hour ago||||
> Devs often want the inherent safety of the boring, easy stuff for a while

That matches my experience. In my first job, every time a new webapp project was starting, it was fun. Not because of challenges or design, but simply because of the trivial stuff done for the n-th time: user accounts, login, password reset, admin panel. It probably should have been automated at that point, but we got away with reinventing the wheel each time.

raincole 3 hours ago||||
> AI changes the job to be a constant struggle with hard problems.

Very true. I think AI (especially Claude Code) forced me to actually think hard about the problem at hand before implementing the solution. And more importantly, to write down my thoughts before they flit away from my feeble mind. A discipline I wish I'd had before.

dvfjsdhgfv 2 hours ago||
That's strange, I've never thought it could be done this way. Normally I'd read the docs, maybe sketch out some diagrams, then maybe take a walk while thinking about how to solve the problem, and by the time I got back to the screen I'd already have a good idea of how to do it.

These days the only difference is that I feed my ideas to a few different LLMs to have "different opinions". Usually they're crap but sometimes they present something useful that I can implement.

simianwords 1 hour ago||||
This is exactly why people hate AI. It disrupts the comfort of easy coding.
sublinear 1 hour ago||||
I think you're describing things we already knew long before this era of AI. Less code is better code, and the vast majority of bugs come from the devs who "hate the boring easy bits".

I disagree that this has anything to do with people needing a break. All code eventually has to be reviewed. Regardless of who or what wrote it, writing too much of it is the problem. It's also worth considering how much more code could be eliminated if the business more critically planned what they think they want.

These tensions have existed even before computers and in all professions.

pydry 3 hours ago||||
>AI changes the job to be a constant struggle with hard problems

I find this hilarious. From what I've seen watching people do it, it changes the job from deep thought and figuring out a good design to pulling a lever on a slot machine and hoping something good pops out.

The studies that show diminished critical thinking have matched what I saw anecdotally pairing with people who vibe coded. It replaced deep critical thinking with a kind of faith-based gambler's mentality ("maybe if I tell it to think really hard it'll do it right next time...").

The only times I've seen a notable productivity improvement is when it was something not novel, where it didn't particularly matter if what popped out was shit - e.g. a proof of concept, an ad hoc app, something that would naturally either work or fail obviously, etc. The buzz people get from these gamblers' highs when it works seems to make them happier than if they didn't use it at all, though.

bdcravens 3 hours ago|||
Which was my original point. Not that the outcome is shit. So much of what we write is absolutely low-skill and low-impact, but necessary and labor-intensive. Most of it is so basic and boilerplate you really can't look at it and know if it was machine- or human-generated. Why shouldn't that work get cranked out in seconds instead of hours? Then we can do the actual work we're paid to do.

To pair this with the comment you're responding to, the decline in critical thinking is probably a sign that there's many who aren't as senior as their paycheck suggests. AI will likely lead to us being able to differentiate between who the architects/artisans are, and who the assembly line workers are. Like I said, that's not a new problem, it's just that AI lays that truth bare. That will have an effect generation over generation, but that's been the story of progress in pretty much every industry for time eternal.

skydhash 2 hours ago||
> So much of what we write is absolutely low-skill and low-impact, but necessary and labor-intensive. Most of it is so basic and boilerplate you really can't look at it and know if it was machine- or human-generated.

Is it really? Or is it a refusal to do actual software engineering, letting the machine take care of it (deterministically) and moving up the ladder in terms of abstraction? I've seen people describing things as sludge, but they've never learned awk to write a simple script to take care of the work. Or learned how to use their editor, instead using the same patterns they would have with Notepad.

I think it's better to take a step back and reflect on why we're spending time on basic stuff in the first place, instead of praying that the LLM will generate some good basic stuff.

bdcravens 23 minutes ago||
If you're not able to review what it generates, you shouldn't be using it (and arguably you're the wrong person to be doing the boilerplate work to begin with).

Put differently, I go back to my original comment, where AI is essentially a junior/mid dev to whom you can express what needs to be done in enough detail. In either case, AI or dev, you'd review and/or verify the work.

> Or is it a refusal to do actual software engineering, letting the machine take care of it (deterministically) and moving up the ladder in terms of abstraction.

One could say the same of installing packages in most modern programming languages instead of writing the code from first principles.

lukaslalinsky 3 hours ago|||
I think there are two kinds of uses for these tools:

1) you try to explain what you want to get done

2) you try to explain what you want to get done and how to get it done

The first one is gambling; the second has a very small failure rate. At worst, the plan it presents shows it's not heading toward the solution you want.

CuriouslyC 11 minutes ago||
The thing is to understand that a model has "priors" which steer how it generates code. If what you're trying to build matches the priors of the model, you can basically surf the gradients to working software with no steering, using declarative language. If what you want to build isn't well encoded by the model's priors, it'll constantly drift, and you need to use shorter prompts and specify the how more (imperative).
bdcravens 3 hours ago|||
> In some cases, especially with the more senior devs in my org, fear of the good parts is why they're against AI. Devs often want the inherent safety of the boring, easy stuff for a while. AI changes the job to be a constant struggle with hard problems. That isn't necessarily a good thing. If you're actually senior by virtue of time rather than skill, you can only take on a limited number of challenging things one after another before you get exhausted.

The issue of senior-juniors has always been a problem; AI simply means they're losing their hiding spots.

jb3689 17 minutes ago|||
100% agree. I am interested in seeing how this will change how I work. I'm finding that I'm now more concerned with how I can keep the AI busy and how I can keep the quality of outputs high. I believe it has a lot to do with how my projects are structured and documented. There are also some menial issues (e.g. structuring projects to avoid merge conflicts becoming bottlenecks)

I expect that in a year my relationship with AI will be more like a TL working mostly at the requirements and task definition layer managing the work of several agents across parallel workstreams. I expect new development toolchains to start reflecting this too with less emphasis on IDEs and more emphasis on efficient task and project management.

I think the "missed growth" of junior devs is overblown though. Did the widespread adoption of higher-level languages really hurt the careers of developers who missed out on the days when we had to do explicit memory management? We're just shifting the skillset and removing unnecessary overhead. We could argue endlessly about technical depth being important, but in my experience it hasn't ever been truly necessary to succeed in your career. We'll mitigate these issues the same way we do with higher-level languages: by first focusing on the properties and invariants of the solutions, outside-in.

timeinput 1 hour ago|||
I have a couple of niche areas of non-coding interest where I'm using AI to code. It is so amazing to write Rust and just add `todo!(...)` throughout the boilerplate. The AI is miserable at implementing domain knowledge in those niche areas, but now I can focus on describing the domain knowledge (in real Rust code, because I can't describe it precisely enough in English + pseudocode), and then say "fill in the todos, write some tests, make sure it compiles and passes linting", verify the tests check things properly, and I'm done.

I've struggled heavily trying to figure out how to get it to write the exactly correct 10 lines of code that I need for a particularly niche problem, and so I've kind of given up on that, but getting it to write the 100 lines of code around those magic 10 lines saves me so much trouble, and opens me up to so many more projects.
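For readers unfamiliar with the pattern described above, here is a rough Python analogue (all names invented for illustration): the domain-critical core is written by hand, and the surrounding boilerplate is left as explicit stubs, the moral equivalent of Rust's `todo!()`, for the model to fill in.

```python
def resample_ratio(src_rate: float, dst_rate: float) -> float:
    """Domain-critical core, written by hand because precision matters here."""
    if src_rate <= 0 or dst_rate <= 0:
        raise ValueError("rates must be positive")
    return dst_rate / src_rate


# Boilerplate left as stubs. Prompt to the model: "fill in the
# NotImplementedError bodies, write some tests, make sure linting passes."
def load_config(path: str) -> dict:
    raise NotImplementedError("todo: parse the config file")


def write_report(results: list, path: str) -> None:
    raise NotImplementedError("todo: format and save the report")
```

The stubs double as a to-do list: anything still raising `NotImplementedError` is work the model (or you) hasn't done yet, and the hand-written core stays untouched.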

ttiurani 41 minutes ago|||
> I can build anything, but often struggle with getting bogged down with all the basic work. I love AI for speed running through all the boring stuff and getting to the good parts.

I'm in the same boat (granted, 10 years less) but can't really relate with this. By the time any part becomes boring, I start to automate/generalize it, which is very challenging to do well. That leaves me so little boring work that I speed run through it faster by typing it myself than I could prompt it.

The parts in the middle – non-trivial but not big picture – in my experience are the parts where writing the code myself constantly uncovers better ways to improve both the big picture and the automation/generalization. Because of that, there are almost no lines of code that I write that I feel I want to offload. Almost every line of code either improves the future of the software or my skills as a developer.

But perhaps I've been lucky enough to work in the same place for long. If I couldn't bring my code with me and had to constantly start from scratch, I might have a different opinion.

ChrisMarshallNY 1 hour ago|||
I have a similar view of AI.

I find it best as a "personal assistant" that I can use to give me information (sometimes highly focused) at a moment's notice.

> The potential for then stunting the growth of actual juniors into tomorrow's senior developers is a serious concern

I think it's a very real problem. I am watching young folks being frozen out of the industry, at the very beginning of their careers. It is pretty awful.

I suspect that the executives know that AI isn't yet ready to replace senior-levels, but they are confident that it will, soon, so they aren't concerned that there aren't any more seniors being crafted from youngsters.

bob1029 45 minutes ago|||
> I can pretty much see projects in my head, and know exactly what to build.

This is where AI actually helps - you have a very precise vision of what you want, but perhaps you've forgotten about the specific names of certain API methods, etc. Maybe you don't want to implement all the cases by hand. Often validating the output can take just seconds when you know what it is you're looking for.

The other part of making the output do what you want is the ability to write a prompt that captures the most essential constraints of your vision. I've noticed the ability to write and articulate ideas well in natural language is the actual bottleneck for most developers. Communicating your ideas takes just as much practice as anything else to get good at.

curl-up 3 hours ago|||
Exactly. I tend to like Hotz, but by his description, every developer is also "a compiler", so it's a useless argument.

My life quality (as a startup cofounder wearing many different hats across the whole stack) would drop significantly if Cursor-like tools [1] were taken away from me, because it takes me a lot of mental effort to push myself to do the boring task, which leads to procrastination, which leads to delays, which leads to frustration. Being able to offload such tasks to AI is incredibly valuable, and since I've been in this space from "day 1", I think I have a very good grasp on what type of task I can trust it to do correctly. Here are some examples:

- Add logging throughout some code

- Turn a set of function calls that have gotten too deep into a nice class with clean interfaces

- Build a Streamlit dashboard that shows some basic stats from some table in the database

- Rewrite this LLM prompt to fix any typos and inconsistencies - yeah, "compiling" English instructions into English code also works great!

- Write all the "create index" lines for this SQL table, so that <insert a bunch of search use cases> perform well.
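To make the second item above concrete, here is a toy before/after sketch (all names invented) of the kind of refactor being delegated: a chain of functions that each thread the same state through becomes a small class with one clean public method.

```python
# Before (flattened): every call threads the same state through.
#   save(conn, settings, enrich(conn, settings, fetch(conn, settings, uid)))

class UserPipeline:
    """After: shared state lives on the class; callers see one entry point."""

    def __init__(self, conn, settings):
        self.conn = conn
        self.settings = settings

    def _fetch(self, user_id):
        # Stand-in for a real query against self.conn.
        return {"id": user_id}

    def _enrich(self, record):
        record["region"] = self.settings.get("region", "eu")
        return record

    def process(self, user_id):
        """The one public method callers need to know about."""
        return self._enrich(self._fetch(user_id))
```

This kind of transformation is mechanical enough to describe in a sentence or two, which is exactly what makes it a good candidate to offload.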

[1] I'm actually currently back to Copilot Chat, but it doesn't really matter that much.

skydhash 2 hours ago||
> Add logging throughout some code

That's one of the things that I wouldn't delegate to an LLM. Logging is like a report of things that happen. And just like a report, I need the most relevant and useful information.

...

A lot of these use cases actually describe the what. But the most important question is always the why. Why is it important to you? Or to the user? That's when things have a purpose and aren't just toys.

curl-up 1 hour ago||
Code with logging is "self-reporting", but adding the logging statements is not itself the reporting. Adding `logger.error(f"{job} failed")` in the applicable places is mechanical, and LLMs are perfectly capable of doing it.

As to why: it's because I'm building an app with a growing userbase and need to accommodate their requirements and build new features to stay ahead of the competition. Why you decided I'm describing a toy project is beyond me.
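As a concrete (hypothetical) illustration of the point: the creative decision is choosing where the failure boundary is; the line itself is boilerplate an LLM can place by pattern-matching.

```python
import logging

logger = logging.getLogger("jobs")


def run_job(job: str, task) -> bool:
    """Run a task, logging the outcome at the failure boundary."""
    try:
        task()
        logger.info("%s succeeded", job)
        return True
    except Exception:
        # The kind of line an LLM can insert mechanically, with traceback:
        logger.exception("%s failed", job)
        return False
```

The judgment call (what counts as a failure, and at which layer to catch it) stays with the author; the statements themselves are rote.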

skydhash 1 hour ago||
As someone else said: launch is only the first step. That's when practicality starts to matter.

The reason senior engineers are paid that well is not because they need to type a lot of code to get new features in. It's because they need to figure out how to have less code while having more features.

3abiton 2 hours ago|||
> I love AI for speed running through all the boring stuff and getting to the good parts.

But the issue is that some of that speedrunning sometimes takes so much time it becomes inefficient. It's slowly improving (GPT-5 is incredible), but sometimes it gets stuck on a really mundane issue and regresses endlessly unless I intervene. And I'm talking about straightforward functional code.

st-keller 1 hour ago|||
That's exactly why I like AI too. I even let them play roles like "junior dev", "product owner" or "devops engineer" and orchestrate them to play together as a team, with guidance from me (usually the "solution architect" or "investor")! This "team" achieves in weeks what we usually needed months for, at €2.40/h per role!
kaffekaka 19 minutes ago||
I can't tell if you are being sarcastic but this sounds absurd. Why let the AI be junior, why not an expert?

This persona driven workflow is so weird to me. Feels like stuck in old ways.

DrewADesign 2 hours ago|||
Yes, unfortunately the boring parts are what junior devs used to do so the senior devs could work on the good stuff. Now that AI is doing the boring stuff nobody has to hire those pesky jr developers anymore. Yay?

The problem is that junior developers are what we make senior developers with, so in 15 years this is going to be yet another thing that the US used to be really good at but is no longer capable of doing, just like many important trades in manufacturing. The manufacturers were all only concerned with their own immediate profit and made the basic sustainability of their workforce, let alone the health of the trades that supported their industries, a problem for everyone else to take care of. Well, everyone else did the same thing.

chewz 2 hours ago|||
> The problem is that junior developers are what we make senior developers with— so in 15 years

In 15 years, senior developers will not be needed either. Anyway, no company is obliged to worry about a 15-year timescale.

m_fayer 2 hours ago|||
It's yet another place where we know our own capacity as a society is shrinking, and we're hoping that ??? (AI? Robots? Fusion?) will fix it before it's too late. I never thought programming would join elder-care in this category though; that came as a surprise.
agentcoops 2 hours ago|||
I have a similar relation to AI with programming -- and my sense is very many HN readers do as well, evidenced not least by the terrific experience report from antirez [1]. Yet it is rare to see such honest and open statements even here. Instead, HN is full of endless anti-AI submissions on the front page where the discussion underneath is just an echo chamber of ill-substantiated attacks on 'AI hype' and where anything else is down-voted.

It's what is, to me, so bizarre about the present moment: certainly investment is exceptionally high in AI (and of course use), but the dominant position in the media is precisely such a strange 'anti-AI hype' that positions itself as a brave minority position. Obviously, OpenAI/Altman have made some unfortunate statements in self-promotion, but otherwise I genuinely can't think of something I've read that expresses the position attacked by the anti-AI-ers -- even talk of 'AGI' etc comes from the AI-critical camp.

In a sense, the world seems divided into three: obvious self-promotion from AI companies that nobody takes seriously, ever-increasingly fervent 'AI critique', and the people who, mostly silent, have found modern AI with all its warts to be an incomparably useful tool across various dimensions of their life and work. I hope the third camp becomes more vocal so that open conversations about the ways people have found AI to be useful or not can be the norm not the exception.

[1] https://antirez.com/news/154

QuadmasterXLII 1 hour ago||
It’s hard to see how the current rate of progress is compatible with, 30 years from now, it being good business sense to pay human professionals six figure salaries. Opinions then split: the easiest option is pure denial, to assume that the current rate of progress doesn’t exist. Next easiest is to assume that progress will halt soon, then that we will be granted the lifestyle of well paid professionals when unsupervised AI can do our job for cheaper, then that Altman will at least deign to feed us.
haute_cuisine 3 hours ago|||
Would love to see a project you built with the help of AI, can you share any links?
bdcravens 3 hours ago|||
Most of my work is for my employer, but the bigger point is that you wouldn't be able to tell my "AI work" from my other work because I primarily use it for the boring stuff that is labor-intensive, while I work on the actual business cases. (Most of my work doesn't fall under the category of "web application", but rather, backend and background-processing intensive work that just happens to have an HTML front-end)
williamcotton 3 hours ago||||
https://github.com/williamcotton/webpipe

Shhh, WIP blog post (on webpipe powered blog)

https://williamcotton.com/articles/introducing-web-pipe

Yes, I wrote my own DSL, complete with BDD testing framework, to write my blog with. In Rust!

  GET /hello/:world
    |> jq: `{ world: .params.world }`
    |> handlebars: `<p>hello, {{world}}</p>`

  describe "hello, world"
    it "calls the route"
      when calling GET /hello/world
      then status is 200
      and output equals `<p>hello, world</p>`
My blog source code written in webpipe:

http://github.com/williamcotton/williamcotton.com

wwweston 3 hours ago|||
What’s the tooling you’re using, and the workflow you find yourself drawn to that boosts productivity?
bdcravens 3 hours ago||
I've used many different ones and find the results pretty similar. I've used Copilot in VS Code, ChatGPT stand-alone, Warp.dev's baked-in tools, etc. Often it's a matter of what kind of work I'm doing, since it's rarely single-mode.
spion 1 hour ago|||
I don't think that's contrary to the article's claim: the current tools are so bad and tedious to use for repetitive work that AI is helpful with a huge amount of it.
somewhereoutth 2 hours ago||
> developer somewhere between junior and mid-level

Why the insistence on anthropomorphizing what is just a tool? It has no agency, does not 'think' in any meaningful manner, it is just pattern matching on a vast corpus of training data. That's not to say it can't be very useful - as you seem to have found - but it is still just a tool.

AlecSchueler 1 hour ago||
It's not anthropomorphising though, is it? It's just a comparison of the tool's ability. Like talking about the horsepower of an engine.
CuriouslyC 18 minutes ago||
This post is such a cold take, and is going to age horribly.

Self driving cars fail because of regulatory requirements for five nines reliability, and they're doing inference over a dynamic noisy domain.

Autonomous engineering does not have these issues. Code doesn't need to be five nines correct, and the domain of inference is logical and basically static.

If the AI agent/coding companies didn't have their heads up their collective asses we could have fully spec driven autonomous coding within ~3 years, 100%.

hereme888 2 hours ago||
I'm a 100% vibe-coder. AI/CS is not my field. I've made plenty of neat apps that are useful to me. Don't ask me how they work; they just do.

Sure the engineering may be abysmal, but it's good enough to work.

It only takes basic english to produce these results, plus complaining to the AI agent that "The GUI is ugly and overcrowded. Make it look better, and dark mode."

Want specs? "include a specs.md"

This isn't a 20% more productive feeling. It's productivity beyond what I will ever do on my own, given this is not my field.

This is all possible because AI was trained on the outstanding work of CS engineers like y'all.

But the article is highly opinionated. It's like saying only PhDs can be called scientists, or only programmers can be computer hackers. But in reality every human is a scientist and a hacker in the real world. The guy on a street corner in India came up with novel ways to make and sell his product, but never wrote a research paper on it. The guy on his fourth marriage noted a statistical correlation in outcomes between meeting women at a bar vs. at church. The plant that grew in the crevice of a rock noted that sunlight absorption was optimal at an angle of 78.3 degrees and grew in that direction.

ozim 1 hour ago||
You made a forest hut and you are calling out people who build skyscrapers - gatekeepers.
hereme888 1 hour ago||
No. I'm just saying: "yes, AI can code."
neurostimulant 54 minutes ago|||
I think it's like CMSes and page builders enabling people to build their own websites without HTML and server knowledge. They're not making web developers disappear; instead there are more web developers now, because some of those people eventually outgrow their page builders and need to hire web developers.
croes 2 hours ago|||
The crucial part is security.

If the app runs locally it doesn't matter; if it's connected to the net, it could be the seed for the next Mirai botnet.

chatmasta 27 minutes ago|||
It’s a pretty good solution for creating live mockups. A designer on my team came back eight hours after a meeting with a fully vibe coded, multi-page interface. I was honestly blown away. I had no idea this was the state of what’s possible with these tools.

Was it a real website? No, but it’s a live mockup way better than any Figma mock or rigid demo-ware.

rhizome31 1 hour ago||||
Apps running locally can also be subject to security issues. What you're trying to say is probably "apps not using untrusted input". If an app takes no input at all, I guess we could say that security isn't an issue, but there could still be safety issues.
hereme888 2 hours ago|||
Oh I'd never argue that. Cloud stuff is truly beyond the complexity I'd get involved with at the moment.
suddenlybananas 2 hours ago||
What have you actually made?
hereme888 2 hours ago||
What's the intent behind your question?
Sammi 1 hour ago|||
The Internet is awash with people making the same claims you are, but where are the actual results that we can see and use? Where are all these supposed new programs that were only possible to make because of generative ai? The number of new apps in the app store is flat. Still getting the same amount in 2025 as in 2022.
hereme888 1 hour ago||
App-store listing is a whole other animal. I don't care to go through all that just to share my app. I also don't care to resolve every technical issue others experience. Every time I've thought about generating revenue by selling my apps, two thoughts come to mind: my code is not professional-grade, and the field is so competitive that within months a professional will likely create a better app, so why pollute the web with something subpar.

The hacker on the street corner isn't distributing his "secret sauce" because it wouldn't meet standards, but it works well for him, and it was cheap/free.

athrowaway3z 1 hour ago|||
Evaluating your empirical experience by judging the complexity you're impressed by.
hereme888 1 hour ago||
Valid inquiry. In relative terms I'm the Indian on a street corner who hacks things together using tools professionally designed by others. Among the repos I've chosen to publicly share: https://github.com/sm18lr88
demirbey05 4 hours ago||
I started fully coding with Claude Code. It's not just vibe coding, but rather AI-assisted coding. I've noticed there's a considerable decrease in my understanding of the whole codebase, even though I'm the only one who has been coding this codebase for 2 years. I'm struggling to answer my colleagues' questions.

I am not defending we should drop AI, but we should really measure its effects and take actions accordingly. It's more than just getting more productivity.

krystofee 2 hours ago||
I’m experiencing something similar. We have a codebase of about 150k lines of backend code. On one hand, I feel significantly more productive - perhaps 400% more efficient when it comes to actually writing code. I can iterate on the same feature multiple times, refining it until it’s perfect.

However, the challenge has shifted to code review. I now spend the vast majority of my time reading code rather than writing it. You really need to build strong code-reading muscles. My process has become: read, scrap it, rewrite it, read again… and repeat until it’s done. This approach produces good results for me.

The issue is that not everyone has the same discipline to produce well-crafted code when using AI assistance. Many developers are satisfied once the code simply works. Since I review everything manually, I often discover issues that weren’t even mentioned. During reviews, I try to visualize the entire codebase and internalize everything to maintain a comprehensive understanding of the system’s scope.

dm3 1 hour ago||
I'm very surprised you find this workflow more efficient than just writing the code. I find constructing the mental model of the solution and how it fits into existing system and codebase to be 90% of effort, then actually writing the code is 10%. Admittedly, I don't have to write any boilerplate due to the problem domain and tech choices. Coding agents definitely help with the last 10% and also all the adjacent work - one-off scripts where I don't care about code quality.
apercu 3 hours ago|||
I wrote a couple python scripts this week to help me with a midi integration project (3 devices with different cable types) and for quick debugging if something fails (yes, I know there are tools out there that do this but I like learning).

I could have used an LLM to assist, but then I wouldn't have learned much.

But I did use an LLM to make a management wrapper to present a menu of options (cli right now) and call the scripts. That probably saved me an hour, easily.

That’s my comfort level for anything even remotely “complicated”.
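For the curious, that kind of wrapper really is small. A stdlib-only sketch of the dispatch-table pattern (the script names here are invented placeholders, not the actual scripts):

```python
import subprocess

# Hypothetical scripts; substitute the real filenames.
MENU = {
    "1": ("Scan MIDI devices", ["python", "scan_devices.py"]),
    "2": ("Run debug check", ["python", "debug_check.py"]),
    "q": ("Quit", None),
}

def render_menu():
    # Build the text menu from the dispatch table.
    return "\n".join(f"{key}) {label}" for key, (label, _) in MENU.items())

def dispatch(choice, runner=subprocess.run):
    """Look up a menu choice and run its script.

    Returns the command that was run, or None for quit/unknown choices.
    `runner` is injectable so the logic can be tested without spawning
    real processes."""
    entry = MENU.get(choice)
    if entry is None or entry[1] is None:
        return None
    runner(entry[1])
    return entry[1]

def main():
    while True:
        print(render_menu())
        if dispatch(input("> ")) is None:
            break
```

Adding a new script is then just one more line in the table, which is exactly the kind of mechanical extension an LLM handles well.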

ionwake 3 hours ago|||
I keep wanting to go back to using Claude Code but I get worried about this issue. How best to use it to complement you, without it rewriting everything behind the scenes? What's the best protocol? Constant commit requests and reviews?
numbers_guy 4 hours ago||
This is the chief reason I don't use integrations. I just use chat, because I want to physically understand and insert code myself. Else you end up with the code overtaking your understanding of it.
pmg101 3 hours ago||
Yes. I'm happy to have a sometimes-wrong expert to hand. Sometimes it provides just what I need, sometimes like with a human (who are also fallible), it helps to spur my own thinking along, clarify, converge on a solution, think laterally, or other productivity boosting effects.
matt3D 3 hours ago||
This is a more extreme example of the general hacker news group think about AI.

Geohot is easily a 99.999 percentile developer, and yet he can’t seem to reconcile that the other 99.999 percent are doing something much more basic than he can ever comprehend.

It’s some kind of expert paradox, if everyone was as smart and capable as the experts, then they wouldn’t be experts.

I have come across many developers that behave like the AI. Can’t explain codebases they’ve built, can’t maintain consistency.

It’s like an aerospace engineer refusing to believe that the person who designs the toys in a Kinder egg doesn’t know how fluid sims work.

jimmydoe 1 hour ago||
This.

I think his excellence in his own trade has limited his vision for the 99% who just want to get by in the job. How many devs even deal with a compiler directly these days? They write some code, fix some red underlines, then push, pray, and wait for the pipeline to pass. LLMs will be gods in this process, and you can even beg another one if your current one isn't working.

mihaic 1 hour ago||
> Geohot is easily a 99.999 percentile developer

I keep seeing people praise this guy, but honestly I never saw anything impressive in anything he's done. He does seem to be prolific and full of energy, but I've seen plenty of equally talented people.

Sammi 1 hour ago|||
You've seen plenty of people who hacked the PS3 and iPhone as teenagers and created a low-level system analysis tool for doing such hacks? You've seen plenty of people writing self-driving car software a decade ago? Why did you write this when you know nothing?
mihaic 15 minutes ago||
I actually have seen plenty of people that could have done something like this, but did not because they simply never tried. Being daring by itself is a skill, but we're talking raw technical ability here.

I've actually seen another developer that was probably in the same category write his own self-driving software. It kind of worked, but couldn't have ever been production ready, so it was just an exercise in flexing without any practical application.

So, what product that George built do you actually use?

mritchie712 1 hour ago|||
For the comment above, the more relevant denominator is all humans vs. all developers. With all humans as the denominator, he's easily in the top 1%, if not the top 0.001% (I haven't followed his work closely, but you'd only have to be a good dev to be in the top 1% of the global population).
mihaic 14 minutes ago||
Thank you, perhaps I worded it harshly, but that was my general feeling. Being a good developer already is a high level. Being able to start impressive-sounding projects that never materialize into anything is a luxury for which most competent developers simply don't have the extra energy.
amirhirsch 31 minutes ago||
That METR study gets a lot of traction for its headline, and I doubt many people read the whole thing (it was long), but the data showed a 50% speedup for the one dev with the most Cursor/AI experience, suggesting both a learning curve and wild statistical variation on a small sample set. An erratum later suggested that another dev, who saw no speedup, had misrepresented their experience, which further calls the significance of the findings into question.

The specific time sinks measured in the study are addressable with improved technology, like faster LLMs, and improved methodology, like running parallel agents; the study was done in March running Claude 3.7, before Claude Code existed.

We also should value the perception of having worked 20% less even if you actually spent more time. Time flies when you’re having fun!

dsiegel2275 1 hour ago||
So I have all kinds of problems with this post.

First, the assertion that the best model of "AI coding" is that it is a compiler. Compilers deterministically map a formal language to another under a spec. LLM coding tools are search-based program synthesizers that retrieve, generate, and iteratively edit code under constraints (tests/types/linters/CI). That’s why they can fix issues end-to-end on real repos (e.g., SWE-bench Verified), something a compiler doesn’t do. Benchmarks now show top agents/models resolving large fractions of real GitHub issues, which is evidence of synthesis + tool use, not compilation.

Second, that the "programming language is English". Serious workflows aren’t "just English." They use repo context, unit tests, typed APIs, JSON/function-calling schemas, diffs, and editor tools. The "prompt" is often code + tests + spec, with English as glue. The author attacks the weakest interface, not how people actually ship with these tools.

Third, non-determinism isn't disqualifying. Plenty of effective engineering tools are stochastic (fuzzers, search/optimization, SAT/SMT with heuristics). Determinism comes from external specs: unit/integration tests, type systems, property-based tests, CI gates.
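To make the "determinism comes from external specs" point concrete, here's a hand-rolled property-based check (stdlib only; a library like Hypothesis does this more thoroughly). The idea: the generator can be as stochastic as it likes, because acceptance is gated on properties the output must satisfy:

```python
import random

def check_sort_properties(sort_fn, trials=200):
    """Accept a candidate sort implementation (e.g. LLM-generated)
    only if it satisfies the spec on many random inputs."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        out = sort_fn(list(xs))
        # Property 1: the output is ordered.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: the output is a permutation of the input.
        assert sorted(xs) == sorted(out)
    return True

# Any implementation that passes is acceptable, however it was produced.
assert check_sort_properties(sorted)
```

The check itself is deterministic in what it demands, even though both the candidate generator and the sampled inputs are random.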

Fourth, "LLMs are popular only because languages/libraries are bad" is a false dichotomy. Languages are improving (e.g. Rust, TypeScript), yet LLMs still help because the real bottlenecks are API lookup, cross-repo reading, boilerplate, migrations, test writing, and refactors, the areas where retrieval and synthesis shine. These are complementary forces, not substitutes.

Finally, no constructive alternatives are offered. "Build better compilers/languages" is fine but modern teams already get value by pairing those with AI: spec-first prompts, test-gated edits, typed SDK scaffolds, auto-generated tests, CI-verified refactors, and repo-aware agents.

A much better way to think about AI coding and LLMs is that they aren’t compilers. They’re probabilistic code synthesizers guided by your constraints (types, tests, CI). Treat them like a junior pair-programmer wired into your repo, search, and toolchain. But not like a magical English compiler.

intothemild 58 minutes ago||
It's not surprising that you're finding problems with the article. It's written by George Hotz aka Geohot.
mccoyb 1 hour ago||
Excellent response, completely agree.
zkmon 3 hours ago||
Of course, there is some truth in what you say. But business is desperate for new tech that lets it redefine the order (who is big and who is small). There are floating billions chasing short-term returns. Fund managers will be fired if they aren't jumping on the new fad in town. CIOs and CEOs will be fired if they aren't jumping on AI. It's just a nuclear arms race: good for no one, but the other guy is in it, so you need to be too.

Think about this: before there were cars on roads, people were just as happy. Cars came, cities were redesigned around them with buildings miles apart, and commuting miles became the new norm. You can no longer say cars are useless, because the context around them has changed to make cars a basic need.

AI does the same thing. It changes the context in which we work. Everyone expects you to use AI (and cars). It becomes a basic need, though a forced one.

To go further, hardly anything produced by science or technology is a basic need for humans. The context got twisted to make them basic needs. Tech solutions create the very problems they claim to solve; the problem did not exist before the solution came around. That's the core driving force of business.

giveita 3 hours ago||
I have a boring opinion. A cold take? Served straight from the freezer.

He is right, however AI is still darn useful. He hints at why: patterns.

Writing a test suite for a new class when an existing one is in place is a breeze. It can even come up with tests you wouldn't have thought of, or would have been too time-pressed to check.

It also applies to non-test code too. If you have the structure it can knock a new one out.

You could have some Lisp contraption that DRYs all the WETs so there is zero boilerplate. But in reality we are not crafting those perfect codebases; on the whole, we write readable, low-magic, boilerplatey code in our jobs.

skydhash 1 hour ago|
But what about the tests' usefulness? Tests enforce contracts, and contracts are about the domain, not the implementation. The number of tests doesn't matter as much as what is actually being verified. If you look at the code to decide what to test, you are doing it wrong.
giveita 1 hour ago||
The usefulness is in saving time on boilerplate, plus figuring out tests I may not have thought of.

But I do closely review the code! It turns the usual drudge of writing tests into more of a code review. Last time I did it it had some mistakes I needed to fix for sure.

skydhash 1 hour ago||
There shouldn't be boilerplate in test code. It should be refactored into harnesses, utils, and fixtures instead.
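For instance, repeated per-test setup can collapse into a shared harness. A stdlib `unittest` sketch (the store class here is an invented toy, purely for illustration):

```python
import unittest

class InMemoryStore:
    """Toy system under test: a minimal key-value store."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class StoreHarness(unittest.TestCase):
    """Shared fixture: every test inheriting from this gets a
    pre-populated store, instead of repeating the same setup."""
    def setUp(self):
        self.store = InMemoryStore()
        self.store.put("a", 1)

class TestReads(StoreHarness):
    def test_existing_key(self):
        self.assertEqual(self.store.get("a"), 1)
    def test_missing_key(self):
        self.assertIsNone(self.store.get("b"))
```

Each test then states only its contract; the boilerplate lives once, in the harness.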
vmg12 3 hours ago|
I think this gets to a fundamental problem with the way the AI labs have been selling and hyping AI. People keep on saying that the AI is actually thinking and it's not just pattern matching. Well, as someone that uses AI tools and develops AI tools, my tools are much more useful when I treat the AI as a pattern matching next-token predictor than an actual intelligence. If I accidentally slip too many details into the context, all of a sudden the AI fails to generalize. That sounds like pattern matching and next token prediction to me.

> This isn’t to say “AI” technology won’t lead to some extremely good tools. But I argue this comes from increased amounts of search and optimization and patterns to crib from, not from any magic “the AI is doing the coding”

* I can tell claude code to crank out some basic crud api and it will crank it out in a minute saving me an hour or so.

* I need an implementation of an algorithm that has been coded a million times on github, I ask the AI to do it and it cranks out a correct working implementation.

If I only use the AI in its wheelhouse it works very well, otherwise it sucks.
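The "basic CRUD" case really is pattern work all the way down. The skeleton an agent churns out is roughly this (an in-memory sketch with invented names, no web framework or persistence wired in):

```python
import itertools

class CrudStore:
    """Minimal in-memory CRUD layer: the repetitive skeleton that
    gets written over and over, with only the field names changing."""
    def __init__(self):
        self._rows = {}
        self._ids = itertools.count(1)

    def create(self, data):
        row_id = next(self._ids)
        self._rows[row_id] = dict(data, id=row_id)
        return self._rows[row_id]

    def read(self, row_id):
        return self._rows.get(row_id)

    def update(self, row_id, data):
        if row_id not in self._rows:
            return None
        self._rows[row_id].update(data)
        return self._rows[row_id]

    def delete(self, row_id):
        return self._rows.pop(row_id, None) is not None

# Typical usage of the four operations:
store = CrudStore()
user = store.create({"name": "Ada"})
store.update(user["id"], {"name": "Ada Lovelace"})
```

Because the shape is so well-trodden on GitHub, it sits squarely in the wheelhouse: generation is fast and mistakes are rare.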

KoolKat23 3 hours ago||
I think this comes down to levels of intelligence. Not knowledge, I mean intelligence. We often underestimate the amount of thinking/reasoning that goes into a certain task. Sometimes the AI can surprise you and do something very thoughtful, this often feels like magic.
athrowaway3z 1 hour ago||
Both CRUD and boilerplate are arguably a tooling issue. But there are also a bunch of things only AI will let you do.

My tests with full trace level logging enabled can get very verbose. It takes serious time for a human to parse where in the 100 lines of text the relevant part is.

Just telling an AI: "Run the tests and identify the root cause" works well enough, that nowadays it is always my first step.

More comments...