
Posted by bigwheels 1/26/2026

A few random notes from Claude coding quite a bit last few weeks (twitter.com)
https://xcancel.com/karpathy/status/2015883857489522876
911 points | 847 comments
trivo 7 days ago|
I sometimes wonder about the similarities between this paradigm switch (coding -> vibe coding) and when the industry switched from writing assembler to using high-level languages. In both cases we switched from having to specify every possible implementation detail to focusing more on higher level concepts and letting the machine work out the rest. Maybe in the future, instead of sharing source code, we will share the prompts that we used to create a program. Similar to how different compilers produce different assembly now, "compiling" prompts with different agents/models would give different results. Maybe in the future an analog of the "optimizing compiler" would emerge for agents, which would turn the (working) slop into something cleaner.
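The "share prompts instead of source" idea can be sketched as a build step keyed on both the prompt and the agent version, much like object files keyed on compiler version. Everything below is hypothetical illustration: `run_agent` is a deterministic stand-in for a real coding agent, and the names are made up.

```python
import hashlib
import json

def run_agent(prompt: str, agent: str) -> str:
    """Stand-in for a real coding agent; deterministic here for illustration."""
    return f"// code generated by {agent} from prompt {prompt!r}"

def compile_prompt(prompt: str, agent: str, cache: dict) -> str:
    """'Compile' a prompt with a given agent, caching by (prompt, agent).

    The same prompt built with a different agent is a different artifact,
    so it gets a different cache key -- that's the non-reproducibility the
    comment describes.
    """
    key = hashlib.sha256(json.dumps([prompt, agent]).encode()).hexdigest()
    if key not in cache:
        cache[key] = run_agent(prompt, agent)
    return cache[key]

cache = {}
a = compile_prompt("sort users by signup date", "agent-v1", cache)
b = compile_prompt("sort users by signup date", "agent-v2", cache)
# Same "source" (the prompt), different "compilers" (agents):
# two distinct build artifacts from one prompt.
```

Rebuilding with the same agent reuses the cached artifact, which is roughly what a "prompt lockfile" would have to pin down.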
felineflock 1/28/2026||
xcancel? What is the purpose or benefit of providing a free mirror to x? Doesn't it just end up sparing x's servers and lowering their costs?
moss_dog 1/28/2026||
I prefer xcancel in part because Twitter doesn't let you view replies etc when not logged in.
yojat661 1/28/2026|||
Guessing x loses ad revenue when traffic goes to xcancel.
tryauuum 1/28/2026||
my screen is 60 percent banners about cookies and account creation when I use x
upghost 1/28/2026||
tl;dr - All this AI stuff is just Universal Paperclips[1]

I see a lot of comments about folks being worried about going soft, getting brain rot, or losing the fun part of coding.

As far as I'm concerned this is a bigger (albeit kinda flakey) self-driving tractor. Yeah I'd be bored if I just stuck to my one little cabbage patch I'd been tilling by hand. But my new cabbage patch is now a megafarm. Subjectively, same level of effort.

[1]: https://en.wikipedia.org/wiki/Universal_Paperclips

rileymichael 1/27/2026||
> LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building

as the former, i've never felt _more ahead_ than now due to all of the latter succumbing to the llm hype

svara 1/28/2026||
Basically mirrors my experience.

Interestingly, when you point out this ...

> IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype is imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side.

... here on HN [0] you get a bunch of people telling you to get with the times, grandpa.

Really makes me wonder: Who are these people and why are they doing that?

[0] https://news.ycombinator.com/item?id=46745039

sota_pop 1/28/2026||
> Slopacolypse

Really… REALLY not looking forward to getting this word spammed at me the next 6-12 months… even less so seeing the actual manifestation.

> TLDR

This should be at the start?

I actually have been thinking of trying out Claude Code/OpenCode over this past week… can anyone share experience, tips, tricks, or reference docs?

My normal workflow is using free-tier ChatGPT to help me interrogate or plan my solution/approach, or to understand some docs/syntax/best practice with which I'm not familiar, then doing the implementation myself.

gverrilla 1/28/2026|
Claude Code's official docs are quite nice - that's where I started.
neuralkoi 1/27/2026||
> The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking.

If current LLMs are ever deployed in systems harboring the big red button, they WILL most definitely somehow press that button.

arthurcolle 1/27/2026||
US MIC are already planning on integrating fucking Grok into military systems. No comment.
Havoc 1/28/2026|||
Including classified systems. What could possibly go wrong
blibble 1/28/2026|||
the US is going to stop the chinese by mass production of illegal pornography?
groby_b 1/27/2026||
fwiw, the same is true for humans. Which is why there's a whole lot of process and red tape around that button. We know how to manage risk. We can choose to do that for LLM usage, too.

If instead we believe in fantasies of a single all-knowing machine god that is 100% correct at all times, then... we really just have ourselves to blame. Might as well just have spammed that button by hand.
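The "process and red tape around that button" point can be made concrete as an approval gate: any destructive action an LLM proposes is routed through a human instead of executed directly. A minimal sketch; the tool names and the `DESTRUCTIVE` set are entirely made up.

```python
# Minimal approval gate for LLM tool calls: anything flagged destructive
# needs explicit human sign-off before it runs. All names are illustrative.
DESTRUCTIVE = {"delete_database", "press_big_red_button"}

def execute(call: dict) -> str:
    """Stand-in for actually running the tool."""
    return f"executed {call['tool']}"

def gate(call: dict, approver) -> str:
    """Run safe calls directly; route destructive ones through a human."""
    if call["tool"] in DESTRUCTIVE and not approver(call):
        return "rejected: needs human approval"
    return execute(call)

# An approver that denies everything models every risky request being
# reviewed and turned down -- the red tape the comment describes.
deny_all = lambda call: False
print(gate({"tool": "list_files"}, deny_all))            # runs
print(gate({"tool": "press_big_red_button"}, deny_all))  # blocked
```

The point isn't the mechanism itself but that the same risk controls we apply to humans near the button can wrap LLM tool use too.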

solarized 1/28/2026||
Next milestone: solving authoritarian LLM dependencies. We can’t always get trapped in local minima. Or is that actually okay?
superze 1/27/2026||
I don't know about you guys, but most of the time it's spitting out nonsense SQLAlchemy models and I have to constantly correct it, to the point where I'm back to writing the code myself. The bugs are just astonishing, and after a while I lose control of the codebase to the point where reviewing the whole thing takes a lot of time.

On the other hand, if it were for a job in the public sector, I would just let the LLM spit out some output and play stupid, since the salary is very low.

jbjbjbjb 1/28/2026|
> do generalists outperform specialists?

Depends what we mean by specialist. If it's frontend vs backend, then maybe. If it's general dev vs a specialist scientific programmer, or some other field where a generalist won't have a clue, then this seems like a recipe for disaster (literal disasters included).
