Posted by mpweiher 12/21/2025

A guide to local coding models(www.aiforswes.com)
602 points | 350 comments
m3kw9 12/22/2025|
Nobody doing serious coding will use local models when frontier models are that much better, and no, they are not half a generation behind frontier. More like two generations.
dhruv3006 12/22/2025||
r/locallama has very good discussion for this!
bjt12345 12/22/2025|
/r/localllama is the spelling, I'm forever making this same mistake.
dhruv3006 12/23/2025||
ahaha
bearjaws 12/22/2025||
I am sorry, but anyone who has actually tried this knows it is horrifically slow: significantly slower than you just typing, for any model worth its salt.

That 128GB of RAM is nice, but the time to first token is painfully long on any context over 32k, and the results are not even close to Codex or Sonnet.
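A rough back-of-envelope for why long contexts hurt: time to first token is dominated by prompt prefill, which scales with context length. The 200 tokens/s prefill speed below is an illustrative assumption for a large model on unified memory, not a measured benchmark.

```python
# Time-to-first-token sketch: prefill cost grows linearly with prompt size.
# The prefill speed is an assumption for illustration, not a benchmark.
def time_to_first_token(context_tokens: int, prefill_tok_per_s: float) -> float:
    """Seconds spent processing the prompt before the first output token."""
    return context_tokens / prefill_tok_per_s

local_prefill = 200.0  # tokens/s (assumed)
print(f"{time_to_first_token(32_000, local_prefill):.0f}s for a 32k prompt")
# prints: 160s for a 32k prompt
```

At an assumed 200 tokens/s, a 32k-token prompt means well over two minutes of waiting before the model even starts answering, which matches the complaint above.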

elestor 12/22/2025||
yeah my 4GB of vram isn't gonna cut it
lucideng 12/22/2025||
A Mac dev type using a 5-year-old machine? I'll believe it when I see it. I know a few older Macs still kicking around, but those people use them for basic stuff, not actual work. Mac people jump to new models faster than Taco Bell leaves my body.
artursapek 12/22/2025||
Imagine buying hardware that will be obsolete in 2 years instead of paying Anthropic $200 a month for $1000+ worth of tokens
selcuka 12/22/2025|
> Imagine buying hardware that will be obsolete in 2 years

Unless the PC you buy is more than $4,800 (24 x $200) it is still a good deal. For reference, a MacBook M4 Max with 128GB of unified RAM is $4,699. You need a computer for development anyway, so the extra you pay for inference is more like $2-3K.

Besides, it will still run the same model(s) at the same speed after that period, or even maybe faster with future optimisations in inference.
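The arithmetic above can be sketched out directly. The $2,000 figure for a baseline dev machine is an assumption to make the "extra you pay for inference" concrete; the other numbers come from the comment.

```python
# Break-even sketch: 24 months of a $200/month subscription vs. the
# one-time premium for a 128GB machine. Baseline machine cost is assumed.
subscription_monthly = 200
months = 24
subscription_total = subscription_monthly * months  # $4,800 over two years

macbook_m4_max_128gb = 4_699  # list price cited above
baseline_dev_machine = 2_000  # assumed cost of a machine you'd buy anyway
extra_for_inference = macbook_m4_max_128gb - baseline_dev_machine

print(subscription_total)    # 4800
print(extra_for_inference)   # 2699
```

Under these assumptions the inference premium (~$2.7K) comes in under two years of subscription fees, which is the comparison the comment is making.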

hu3 12/22/2025||
The value depreciation of the hardware alone is going to be significant. Probably enough to pay for 3x ~$20 subscriptions to OpenAI, Anthropic and Gemini.

Also, if you use the same Mac for work, you can't reserve all 128GB for LLMs.

Not to mention a Mac will never run SOTA models like Opus 4.5 or Gemini 3.0, which subscriptions give you.

So unless you're ready to sacrifice quality and speed for privacy, it looks like a suboptimal arrangement to me.

dchftcs 12/22/2025|||
I suspect depreciation will be a bit slower for a while, because there is a supply crunch.
artursapek 12/22/2025|||
Yeah, and that's without even mentioning the fact that you can't run Opus on your own hardware. Total waste of cash.
hmokiguess 12/22/2025|
This seems really interesting. Reminds me of IPFS, but for AI.