
Posted by bigwheels 1/26/2026

A few random notes from Claude coding quite a bit last few weeks (twitter.com)
https://xcancel.com/karpathy/status/2015883857489522876
911 points | 847 comments
Madmallard 1/26/2026|
Are game developers vibe coding with agents?

It's such a visual and experiential thing that writing true success criteria an agent can iterate on, ahead of time, seems borderline impossible.

20260126032624 1/27/2026||
I don't "vibe code" but when I use an LLM with a game I usually branch out into several experiments which I don't have to commit to. Thus, it just makes that iteration process go faster.

Or slower, when the LLM doesn't understand what I want, which is a bigger issue when you spawn experiments from scratch (and have given limited context around what you are about to do).

TheGRS 1/27/2026|||
I'm trying it out with Godot for my little side projects. It can handle writing the GUI files for nodes and settings. The workflow: ask Cursor to change something, review the code changes, then load up the game in Godot to check out the result. Works pretty well. I'm curious whether any Unity or Unreal devs are using it, since I'm sure it's a similar experience.
dysoco 1/28/2026|||
It might be biased to Reddit/Twitter users but from what I've seen game developers seem to be much more averse towards using AI (even for coding) than other fields.

Which is curious since prototyping helps a lot in gamedev.

redox99 1/26/2026|||
Vibe coding in Unreal Engine is of limited use. It obviously helps with C++, but so much of your time goes to things that are not C++. It hurts a lot that UE relies heavily on Blueprints; if they were code, you could just vibe-code a lot of that.
ex-aws-dude 1/28/2026||
A big problem is that a lot of game logic is done in visual scripting (e.g. Unreal Blueprints), which AI tools have no idea about.
TrackerFF 1/28/2026||
Minor nitpick: The original measure of a 10x programmer was not the productivity multiplier max/mean, but rather max/min.
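To make the nitpick concrete, here is a toy illustration (the productivity numbers are hypothetical, purely for showing how much the choice of baseline changes the ratio):

```python
# Hypothetical productivity scores for five engineers (illustrative only).
productivity = [2, 4, 5, 9, 20]

mean = sum(productivity) / len(productivity)          # 8.0
max_over_mean = max(productivity) / mean              # 20 / 8 = 2.5
max_over_min = max(productivity) / min(productivity)  # 20 / 2 = 10.0

# Same team, very different headline number depending on the denominator.
print(max_over_mean, max_over_min)
```

With these numbers the best engineer is only 2.5x the mean but 10x the weakest, which is why the original max/min framing produces much larger ratios.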
nadis 1/26/2026||
The section on IDEs/agent swarms/fallibility resonated a lot with me; I haven't gone quite as far as Karpathy in terms of power use of Claude Code, but the shift in mistakes (and the reality vs. hype analysis) he shared seems spot on in my (caveat: more limited) experience.

> "IDEs/agent swarms/fallability. Both the "no need for IDE anymore" hype and the "agent swarm" hype is imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might do. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow, my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits."

hollowturtle 1/27/2026||
> Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December

Is anyone else wondering what exactly he is actually building? What? Where?

> The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might do.

I would LOVE to get just syntax errors from LLMs. "Subtle conceptual errors that a slightly sloppy, hasty junior dev might do" are neither subtle nor slightly sloppy; they are genuinely serious and harmful, and junior devs don't have the experience to fix them.

> They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?"

Why not just hand-write the 100 LOC, with the help of an LLM for tests, documentation, and some autocomplete, instead of making it write 1000 LOC and then cleaning it up? That cleanup is also very difficult to do: 1000 lines is a lot.

> Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day.

It's a computer program running in the cloud; what exactly did he expect?

> Speedups. It's not clear how to measure the "speedup" of LLM assistance.

See above

> 2) I can approach code that I couldn't work on before because of knowledge/skill issue. So certainly it's speedup, but it's possibly a lot more an expansion.

Mmm, not sure. If you don't have domain knowledge you could take an initial stab at the problem, but what about when you need to iterate on it? You can't, if you don't have domain knowledge of your own.

> Fun. I didn't anticipate that with agents programming feels more fun because a lot of the fill in the blanks drudgery is removed and what remains is the creative part.

No, it's not fun; LLMs produce uninteresting UIs, for example, mostly bloated React/HTML.

> Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually.

My bet is that sooner or later he will get back to coding by hand for periods of time to avoid that, like many others; the damage that overreliance on these tools brings is serious.

> Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

No, programming is not "syntactic details"; the practice of programming is everything but "syntactic details". One should learn how to program, not language X or Y.

> What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows a lot.

Yet no measurable economic effects so far.

> Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).

Did people with smartphones outperform photographers?

TaupeRanger 1/27/2026||
Lots of very scared, angry developers in these comment sections recently...
hollowturtle 1/27/2026|||
Not angry nor scared, I value my hard skills a lot, I'm just wondering why people believe religiously everything AI related. Maybe I'm a bit sick with the excessive hype
jofla_net 1/28/2026||
FOMO really
crystal_revenge 1/28/2026||||
There's no fear (a bit of anger I must admit). I suspect nearly all of the reaction against this comes from a similar place to where mine does:

All of the real world code I have had to review created by AI is buggy slop (often with subtle, but weird bugs that don't show up for a while). But on HN I'm told "this is because your co-workers don't know how to AI right!!!!" Then when someone who supposedly must be an expert in getting things done with AI posts, it's always big claims with hand-wavy explanations/evidence.

Then the comments section is littered with no effort comments like this.

Yet oddly whenever anyone asks "show me the thing you built?" Either it looks like every other half-working vibe coded CRUD app... or it doesn't exist/can't be shown.

If you tell me you have discovered a miracle tool, just show me the results. Not taking increasingly ridiculous claims at face value is not "fear". What I don't understand is where comments like yours come from. What makes you need this to be more than it is?

hollowturtle 1/27/2026||||
Also note that I'm a heavy LLM user, not anti ai for sure
Banditoz 1/28/2026||||
This is extremely reductive and incredibly dismissive of everything they wrote above.
crystal_revenge 1/28/2026||
It's because they don't have a substantive response to it, so they resort to ad hominems.

I've worked extensively in the AI space, and believe that it is extremely useful, but these weird claims (even from people I respect a lot) that "something big and mysterious is happening, I just can't show you yet!" set off my alarms.

When sensible questions are met with ad hominems by supporters, it further sets off alarm bells.

thr59182617 1/27/2026|||
I see way more hype that is boosted by the moderators. The scared ones are the nepo babies who founded a vaporware AI company that will be bought by daddy or friends through a VC.

They have to maintain the hype until a somewhat credible exit appears and therefore lash out with boomer memes, FOMO, and the usual insane talking points like "there are builders and coders".

razodactyl 1/29/2026|||
society doesn't take kindly to the hyper-aware. tone it down.
simianwords 1/27/2026|||
i'm not sure what kind of conspiracy you are hallucinating. do you think people have to "maintain the hype"? it is doing quite well organically.
hollowturtle 1/27/2026||
So well that they're losing billions and OpenAI may go bankrupt this year
simianwords 1/27/2026||
what if it doesn't?
hollowturtle 1/27/2026||
better for them! the heck i care about it
simianwords 1/27/2026||
This is a low quality curmudgeonly comment
hollowturtle 1/27/2026|||
Now that you contributed zero net to the discussion and learned a new word you can go out and play with toys! Good job
potatogun 1/27/2026|||
You learned a new adjective? If people moved beyond "nice", "mean", and "curmudgeonly" they might even read Shakespeare instead of having an LLM produce a summary.
simianwords 1/27/2026||
cool.

>Anyone wondering what exactly is he actually building? What? Where?

this is trivially answerable. it seems like they did not do even the slightest bit of research before asking question after question to seem smart and detailed.

hollowturtle 1/27/2026||
I asked many questions and you focused on only one. Btw, yes, I did my research, and I know him because I followed almost every tutorial he has on YouTube, and he never clearly mentions what weekend project he worked on that led him to such conclusions. I had very high respect for him, if not for the fact that at some point he started acting like the Jesus Christ of LLMs.
simianwords 1/27/2026||
it's not clear why you asked that question if you knew the answer to it?
DeathArrow 1/27/2026||
>LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Quite insightful.

appstorelottery 1/27/2026||
> Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually.

I've been increasingly using LLMs to code for nearly two years now, and I can definitely notice my brain atrophying. It bothers me. Actually, over the last few weeks I've been looking at a major update to a product in production and considered doing the edits manually: at least typing the code from the LLM, and also being much more granular with my instructions (i.e. focusing on one function at a time). I feel in some ways like my brain is turning into slop, and I've been coding for at least 35 years... I feel validated by Karpathy.

epolanski 1/27/2026|
Don't be too worried about it.

1. Manual coding may be less relevant in the future (although the ability to read, interpret, and understand code will matter more). Likely it already is.

2. Any skill you don't practice becomes "weaker". Let me give you an example. I've played chess since childhood, but sometimes I go months, even years, without playing. When I come back I start losing Elo fast: if I was in the top 10% on chess.com, I drop to the top 30% in the following weeks. But after a few months I'm back in the top 10%. Takeaway: your relative ability is more or less the same compared to other practitioners; you're simply rusty.

appstorelottery 1/27/2026||
Thanks for your comment, it set me at ease. I know from experience that you're right on point 2. As for point one, I also tend to agree. AI is such a paradigm shift & rapid/massive change doesn't come without stress. I just need to stay cool about it all ;-)
dzonga 1/28/2026||
maybe it's just me doing stuff that's out of the usual loop

even dealing with APIs that have MCP servers, the so-called agents make a mess of everything.

my stuff is just regular data stuff - ingest data from x - transform it | make it real time - then pipe it to y
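The pipeline shape described above can be sketched minimally with chained generators; `source_x` and `sink_y` are hypothetical stand-ins for the real endpoints, not anything from the comment:

```python
# Minimal ingest -> transform -> pipe sketch (assumed shape, illustrative only).
def ingest(source):
    for record in source:      # pull raw records from x
        yield record

def transform(records):
    for r in records:          # normalize each record (toy transform: double it)
        yield {"value": r * 2}

def pipe(records, sink):
    for r in records:          # push transformed records to y
        sink.append(r)

source_x = [1, 2, 3]           # hypothetical source
sink_y = []                    # hypothetical sink
pipe(transform(ingest(source_x)), sink_y)
print(sink_y)  # [{'value': 2}, {'value': 4}, {'value': 6}]
```

Generators keep each stage lazy, which is the property that matters once "make it real time" means records arrive continuously rather than as a finished batch.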

dubeye 1/28/2026||
the xcancel link is amusing.

9/10 of the most important social media users use X, like it or loathe it

gregorygoc 1/28/2026|
Stop whining
dubeye 4 days ago||
not 100% clear who this is directed at ;)
lingrush4 1/28/2026||
No idea why the poster wants to deprive this author of engagement on his post, but here's the original link: https://x.com/karpathy/status/2015883857489522876
ares623 1/28/2026|
Imagine taking career advice from people who will never need to be employed again in order to survive.
fragmede 1/28/2026|
Yes, typically you take advice from people who've been successful at their career. Are you suggesting we should be taking career advice from high school freshmen instead?
ares623 1/28/2026||
I'm nitpicking on the atrophy bit. He can afford to have his skills or his brain atrophy. His followers, though?

Never mind the fact that he became successful _because_ of his skills and his brain.
