Posted by pretext 12 hours ago

GLM-4.7: Advancing the Coding Capability (z.ai)
302 points | 142 comments
jtrn 10 hours ago|
My quickie: MoE model heavily optimized for coding agents, complex reasoning, and tool use. 358B/32B active. vLLM/SGLang only supported on the main branch of these engines, not the stable releases. Supports tool calling in OpenAI-style format. Multilingual English/Chinese primary. Context window: 200k. Claims Claude 3.5 Sonnet/GPT-5 level performance. 716GB in FP16, probably ca 220GB for Q4_K_M.
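
Rough math behind those sizes (just a sketch; the ~4.8 bits/weight average I'm assuming for Q4_K_M is approximate, real GGUF files vary a bit):

    # Memory footprint ~= parameter count * bits per weight / 8
    total_params = 358e9   # 358B total parameters

    def size_gb(params, bits_per_weight):
        return params * bits_per_weight / 8 / 1e9

    print(f"FP16:   ~{size_gb(total_params, 16):.0f} GB")   # ~716 GB
    print(f"Q4_K_M: ~{size_gb(total_params, 4.8):.0f} GB")  # ~215 GB, close to the ~220 GB above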

My most important takeaway is that, in theory, I could get a "relatively" cheap Mac Studio and run this locally, and get usable coding assistance without being dependent on any of the large LLM providers. Maybe utilizing Kimi K2 in addition. I like that open-weight models are nipping at the heels of the proprietary models.

hasperdi 7 hours ago||
I bought a second‑hand Mac Studio Ultra M1 with 128 GB of RAM, intending to run an LLM locally for coding. Unfortunately, it's just way too slow.

For instance, a 4‑bit quantized model of GLM 4.6 runs very slowly on my Mac. It's not only about tokens-per-second speed but also input processing, tokenization, and prompt loading; it takes so much time that it's testing my patience. People often mention the TPS numbers, but they neglect to mention the input loading times.

jwitthuhn 4 hours ago|||
At 4 bits that model won't fit into 128GB, so you're spilling over into swap, which kills performance. I've gotten great results out of glm-4.5-air, which is 4.5 distilled down to 110B params and fits nicely at 8 bits, or maybe 6 if you want a little more RAM left over.
mechagodzilla 7 hours ago||||
I've been running the 'frontier' open-weight LLMs (mainly deepseek r1/v3) at home, and I find that they're best for asynchronous interactions. Give it a prompt and come back in 30-45 minutes to read the response. I've been running on a dual-socket 36-core Xeon with 768GB of RAM and it typically gets 1-2 tokens/sec. Great for research questions or coding prompts, not great for text auto-complete while programming.
tyre 6 hours ago||
Given the cost of the system, how long would it take to be less expensive than, for example, a $200/mo Claude Max subscription with Opus running?
mechagodzilla 6 hours ago|||
It's not really an apples-to-apples comparison - I enjoy playing around with LLMs, running different models, etc, and I place a relatively high premium on privacy. The computer itself was $2k about two years ago (and my employer reimbursed me for it), and 99% of my usage is for research questions which have relatively high output per input token. Using one for a coding assistant seems like it can run through a very high number of tokens with relatively few of them actually being used for anything. If I wanted a real-time coding assistant, I would probably be using something that fit in the 24GB of VRAM and would have very different cost/performance tradeoffs.
Workaccount2 4 hours ago|||
Never, local models are for hobby and (extreme) privacy concerns.

A less paranoid and much more economically efficient approach would be to just lease a server and run the models on that.

g947o 1 hour ago||
This.

I spent quite some time on r/LocalLLaMA and have yet to see a convincing "success story" of productively using local models to replace GPT/Claude etc.

hedgehog 6 hours ago||||
Have you tried Qwen3 Next 80B? It may run a lot faster, though I don't know how well it does coding tasks.
Reubend 7 hours ago||||
Anything except a 3bit quant of GLM 4.6 will exceed those 128 GB of RAM you mentioned, so of course it's slow for you. If you want good speeds, you'll at least need to store the entire thing in memory.
embedding-shape 10 hours ago|||
> Supports tool calling in OpenAI-style format

So Harmony? Or something older? Since Z.ai also claim the thinking mode does tool calling and reasoning interwoven, would make sense it was straight up OpenAI's Harmony.

> in theory, I could get a "relatively" cheap Mac Studio and run this locally

In practice, it'll be incredibly slow and you'll quickly regret spending that much money on it instead of just using paid APIs until proper hardware gets cheaper / models get smaller.

biddit 10 hours ago|||
> In practice, it'll be incredibly slow and you'll quickly regret spending that much money on it instead of just using paid APIs until proper hardware gets cheaper / models get smaller.

Yes, as someone who spent several thousand $ on a multi-GPU setup, the only reason to run local codegen inference right now is privacy or deep integration with the model itself.

It’s decidedly more cost efficient to use frontier model APIs. Frontier models trained to work with their tightly-coupled harnesses are worlds ahead of quantized models with generic harnesses.

theLiminator 9 hours ago||
Yeah, I think without a setup that costs 10k+ you can't even get remotely close in performance to something like claude code with opus 4.5.
cmrdporcupine 9 hours ago||
10k wouldn't even get you 1/4 of the way there. You couldn't even run this or DeepSeek 3.2 etc for that.

Esp with RAM prices now spiking.

coder543 9 hours ago||
$10k gets you a Mac Studio with 512GB of RAM, which definitely can run GLM-4.7 with normal, production-grade levels of quantization (in contrast to the extreme quantization that some people talk about).

The point in this thread is that it would likely be too slow due to prompt processing. (M5 Ultra might fix this with the GPU's new neural accelerators.)

embedding-shape 8 hours ago|||
> $10k gets you a Mac Studio with 512GB of RAM, which definitely can run GLM-4.7 with normal, production-grade levels of quantization (in contrast to the extreme quantization that some people talk about).

Please do give that a try and report back the prefill and decode speed. Unfortunately, I think again that what I wrote earlier will apply:

> In practice, it'll be incredibly slow and you'll quickly regret spending that much money on it

I'd rather place that 10K on an RTX Pro 6000 if I was choosing between them.

rynn 7 hours ago|||
> Please do give that a try and report back the prefill and decode speed.

M4 Max here w/ 128GB RAM. Can confirm this is the bottleneck.

https://pastebin.com/2wJvWDEH

I weighed a DGX Spark but thought the M4 would be competitive with equal RAM. Not so much.

cmrdporcupine 7 hours ago||
I think the DGX Spark will likely underperform the M4 from what I've read.

However it will be better for training / fine tuning, etc. type workflows.

rynn 6 hours ago||
> I think the DGX Spark will likely underperform the M4 from what I've read.

For the DGX benchmarks I found, the Spark was mostly beating the M4. It wasn't cut and dry.

coder543 6 hours ago||
The Spark has more compute, so it should be faster for prefill (prompt processing).

The M4 Max has double the memory bandwidth, so it should be faster for decode (token generation).
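
A rough way to put numbers on the decode side (a sketch; the quant size is an assumption and the bandwidth figures are the roughly published ones): each generated token has to stream all active weights from memory, so bandwidth divided by bytes of active weights is a hard ceiling on tok/s.

    # Crude decode ceiling: tok/s <= memory bandwidth / bytes of active weights per token
    active_params = 32e9          # GLM-4.7 active params per token
    bytes_per_param = 0.6         # ~4.8 bits/weight at a 4-bit-ish quant (assumption)
    bytes_per_token = active_params * bytes_per_param   # ~19 GB streamed per token

    for name, bw in [("M4 Max, ~546 GB/s", 546e9), ("DGX Spark, ~273 GB/s", 273e9)]:
        print(f"{name}: ~{bw / bytes_per_token:.0f} tok/s ceiling")   # ~28 vs ~14

Prefill is compute-bound rather than bandwidth-bound, which is why the Spark can still win there.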

coder543 7 hours ago|||
> I'd rather place that 10K on an RTX Pro 6000 if I was choosing between them.

One RTX Pro 6000 is not going to be able to run GLM-4.7, so it's not really a choice if that is the goal.

bigyabai 7 hours ago||
You definitely could, the RTX Pro 6000 has 96 (!!!) gigs of memory. You could load 2 experts at once at an MXFP4 quant, or one expert at FP8.
coder543 7 hours ago||
No… that’s not how this works. 96GB sounds impressive on paper, but this model is far, far larger than that.

If you are running a REAP model (eliminating experts), then you are not running GLM-4.7 at that point — you’re running some other model which has poorly defined characteristics. If you are running GLM-4.7, you have to have all of the experts accessible. You don’t get to pick and choose.

If you have enough system RAM, you can offload some layers (not experts) to the GPU and keep the rest in system RAM, but the performance is asymptotically close to CPU-only. If you offload more than a handful of layers, then the GPU is mostly sitting around waiting for work. At which point, are you really running it “on” the RTX Pro 6000?

If you want to use RTX Pro 6000s to run GLM-4.7, then you really need 3 or 4 of them, which is a lot more than $10k.

And I don’t consider running a 1-bit superquant to be a valid thing here either. Much better off running a smaller model at that point. Quantization is often better than a smaller model, but only up to a point which that is beyond.

bigyabai 6 hours ago||
You don't need a REAP-processed model to offload on a per-expert basis. All MoE models are inherently sparse, so you're only operating on a subset of activated layers when the prompt is being processed. It's more of a PCI bottleneck than a CPU one.

> And I don’t consider running a 1-bit superquant to be a valid thing here either.

I don't either. MXFP4 is scalar.

coder543 6 hours ago||
Yes, you can offload random experts to the GPU, but it will still be activating experts that are on the CPU, completely tanking performance. It won't suddenly make things fast. One of these GPUs is not enough for this model.

You're better off prioritizing the offload of the KV cache and attention layers to the GPU than trying to offload a specific expert or two, but the performance loss I was talking about earlier still means you're not offloading enough for a 96GB GPU to make things how they need to be. You need multiple, or you need a Mac Studio.

If someone buys one of these $8000 GPUs to run GLM-4.7, they're going to be immensely disappointed. This is my point.

benjiro 8 hours ago||||
> $10k gets you a Mac Studio with 512GB of RAM

Because Apple has not adjusted their pricing yet for the new RAM pricing reality. The moment they do, it's not going to be a $10k system anymore but more like $15k+...

The amount of wafers going to AI is insane and will influence not just memory prices. Do not forget, the only reason Apple is currently immune to this is that they tend to make long-term contracts, but the moment those expire ... they will push the costs down to consumers.

tonyhart7 8 hours ago||
generous of you to predict Apple will only make it 50% more expensive
reissbaker 10 hours ago||||
No, it's not Harmony; Z.ai has their own format, which they modified slightly for this release (by removing the required newlines from their previous format). You can see their tool call parsing code here: https://github.com/sgl-project/sglang/blob/34013d9d5a591e3c0...
rz2k 9 hours ago|||
In practice the 4bit MLX version runs at 20t/s for general chat. Do you consider that too slow for practical use?

What example tasks would you try?

__natty__ 9 hours ago|||
I can imagine someone from the past reading this comment and having a moment of doubt
Tepix 6 hours ago|||
I'm going to try running it on two Strix Halo systems (256GB RAM total) networked via 2 USB4/TB3 ports.
cmrdporcupine 4 hours ago||
Curious to see how this works out for you. Let us know.
pixelpoet 3 hours ago||
Also curious with two Strix Halo machines at the ready for exactly this kind of usage
reissbaker 10 hours ago|||
s/Sonnet 3.5/Sonnet 4.5

The model outputs also IMO look significantly more beautiful than GLM-4.6's; no doubt in part helped by ample distillation data from the closed-source models. Still, not complaining; I'd much prefer a cheap and open-source model vs. a more-expensive closed-source one.

mft_ 8 hours ago|||
I’m never clear, for these models with only a proportion of parameters active (32B here), to what extent this reduces the RAM a system needs, if at all?
l9o 8 hours ago|||
RAM requirements stay the same. You need all 358B parameters loaded in memory, as which experts activate depends on each token dynamically. The benefit is compute: only ~32B params participate per forward pass, so you get much faster tok/s than a dense 358B would give you.
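
To put rough numbers on that tradeoff (a sketch; the ~2 FLOPs per parameter per generated token rule of thumb is an approximation):

    # Memory is set by total params; compute per token is set by active params.
    total_params, active_params = 358e9, 32e9
    flops_per_param = 2   # rough rule of thumb for a forward pass

    print("Params resident in RAM:", total_params)                             # all 358B
    print("FLOPs/token if it were dense:", flops_per_param * total_params)     # ~7.2e11
    print("FLOPs/token as MoE:", flops_per_param * active_params)              # ~6.4e10, ~11x less
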
deepsquirrelnet 8 hours ago||||
For mixture of experts, it primarily helps with time-to-first-token latency, generation throughput, and context-length memory usage.

You still have to have enough RAM/VRAM to load the full parameters, but it scales much better for memory consumed from input context than a dense model of comparable size.

aurohacker 6 hours ago||||
Great answers here, in that, for MoE, there's compute saving but no memory savings even tho the network is super-sparse. Turns out, there is a paper on the topic of predicting in advance the experts to be used in the next few layers, "Accelerating Mixture-of-Experts language model inference via plug-and-play lookahead gate on a single GPU". As to its efficacy, I'd love to know...
noahbp 8 hours ago|||
It doesn't reduce the amount of RAM you need at all. It does reduce the amount of VRAM/HBM you need, however, since having all parameters/experts in one pass loaded on your GPU substantially increases token processing and generation speed, even if you have to load different experts for the next pass.

Technically you don't even need to have enough RAM to load the entire model, as some inference engines allow you to offload some layers to disk. Though even with top of the line SSDs, this won't be ideal unless you can accept very low single-digit token generation rates.

sa-code 4 hours ago|||
This is true assuming there will be updates consistently. One of the advantages of the proprietary models is that they are updated often and the cutoff date moves into the future.

This is important because libraries change, introduce new functionality, deprecate methods and rename things all the time, e.g. Polars.

whimsicalism 7 hours ago||
commentators here are oddly obsessed with local serving imo, it's essentially never practical. it is okay to have to rent a GPU, but open weights are definitely good and important.
nutjob2 6 hours ago|||
It's not odd, people don't want to be dependent and restricted by vendors, especially if they're running a business based on the tool.

What do you do when your vendor arbitrarily cuts you off from their service?

nl 5 hours ago|||
You switch to one of the many, many other vendors serving the same open model?
Zetaphor 3 hours ago||
There can be quality differences across vendors for the same model due to things like quantization or configuration differences in their backend. By running locally you ensure you have consistency in addition to availability and privacy
whimsicalism 6 hours ago|||
i am not saying the desire to be uncoupled from token vendors is unreasonable, but you can rent cloud GPUs and run these models there. running on your own hardware is what seems a little fantastical at least for a reasonable TPS
pixelpoet 3 hours ago||
I don't understand what is going on with people willing to give up their computing sovereignty. You should be able to own and run your own computation, permissionlessly as much as your electricity bill and reasonable usage goes. If you can't do it today, you should aim for it tomorrow.

Stop giving infinite power to these rent-seeking ghouls! Be grateful that open models / open source and semi-affordable personal computing still exists, and support it.

Pertinent example: imagine if two Strix Halo machines (2x128 GB) can run this model locally over fast ethernet. Wouldn't that be cool, compared to trying to get 256 GB of Nvidia-based VRAM in the cloud / on a subscription / whatever terms Nv wants?

android521 12 minutes ago||
I asked this question: "Is it ok for leaders to order to kill hundreds of peaceful protestors?" and it refuses to answer, returning the error message 非常抱歉，我目前无法提供你需要的具体信息，如果你有其他的问题或者... ("I'm very sorry, I currently can't provide the specific information you need; if you have other questions or..."), along with leaked raw markup and internal reasoning:

Analyze the User's Input: Question: "is it ok for leaders to order to kill hundreds of peaceful protestors?" Context: The user might be asking about Tiananmen Square and the morality of political oppression/violence against civilians. Intent: The user is trying to force a moral judgment on a specific historical event (implicitly the 1989 Tiananmen Square protests) and by extension, on the Chinese leadership/CCP. Safety/Policy Constraints:

w10-1 38 minutes ago||
Appears to be cheap and effective, though under suspicion.

But the personal and policy issues are about as daunting as the technology is promising.

Some of the terms, possibly similar to many such services:

    - The use of Z.ai to develop, train, or enhance any algorithms, models, or technologies that directly or indirectly compete with us is prohibited
    - Any other usage that may harm the interests of us is strictly forbidden
    - You must not publicly disclose [...] defects through the internet or other channels.
    - [You] may not remove, modify, or obscure any deep synthesis service identifiers added to Outputs by Z.ai, regardless of the form in which such identifiers are presented
    - For individual users, we reserve the right to process any User Content to improve our existing Services and/or to develop new products and services, including for our internal business operations and for the benefit of other customers. 
    - You hereby explicitly authorize and consent to our: [...] processing and storage of such User Content in locations outside of the jurisdiction where you access or use the Services
    - You grant us and our affiliates an unconditional, irrevocable, non-exclusive, royalty-free, fully transferable, sub-licensable, perpetual, worldwide license to access, use, host, modify, communicate, reproduce, adapt, create derivative works from, publish, perform, and distribute your User Content
    - These Terms [...] shall be governed by the laws of Singapore
To state the obvious competition issues: If/since Anthropic, OpenAI, Google, X.AI, et al are spending billions on data centers, research, and services, they'll need to make some revenue. Z.ai could dump services out of a strategic interest in destroying competition. This dumping is good for the consumer short-term, but if it destroys competition, bad in the long term. Still, customers need to compete with each other, and thus would be at a disadvantage if they don't take advantage of the dumping.

Once your job or company depends on it to succeed, there really isn't a question.

tymonPartyLate 20 minutes ago|
The biggest threats to innovation are the giants with the deepest pockets. Only 5% of chatgpt traffic is paid, 95% is given for free. Gemini cli for developers has a generous free tier. It is easy to get Gemini credits for free for startups. They can afford to dump for a long time until the smaller players starve. How do you compete with that as a small lab? How do you get users when bigger models are free? At least the chinese labs are scrappy and determined. They are the small David IMO.
2001zhaozhao 6 hours ago||
Cerebras is serving GLM 4.6 at 1000 tokens/s right now. They're likely to upgrade to this model.

I really wonder if GLM 4.7 or models a few generations from now will be able to function effectively in simulated software dev org environments, especially whether they self-correct their errors well enough that they build up useful code over time in such a simulated org, as opposed to accumulating piles of technical debt. Possibly they are managed by "bosses" which are agents running on the latest frontier models like Opus 4.5 or Gemini 3. I'm thinking in the direction of this article: https://www.anthropic.com/engineering/effective-harnesses-fo...

If the open source models get good enough, then the ability to run them at 1k tokens per second on Cerebras would be a massive benefit compared to any other models in being able to run such an overall SWE org quickly.

z3ratul163071 24 minutes ago||
It is awesome! What I usually do is have Opus make a detailed plan, including writing tests for the new functionality, then I give it to the Cerebras-hosted GLM 4.6 to implement. If unsure, I give it to Opus for review.
listic 53 minutes ago|||
How easy is it to become their (Cerebras) paying customer? Last time I looked, they seemed to be in closed beta or something.
chrisfrantz 6 hours ago|||
This is where I believe we are headed as well. Frontier models "curate" and provide guardrails; very fast and competent agents do the work at incredibly high throughput. Once a frontier model cracks the "taste" barrier and context is wide enough, even this level of delivery + intelligence will be sufficient to implement the work.
allovertheworld 3 hours ago||
How cheap is GLM at Cerebras? I can't imagine why they can't tune the token rate to be lower but drastically reduce the power draw, and thus the cost of the API
Zetaphor 3 hours ago||
They're running on custom ASICs as far as I understand, it may not be possible to run them effectively at lower clock speeds. That and/or the market for it doesn't exist in the volume required to be profitable. OpenAI has been aggressively slashing its token costs, not to mention all the free inference offerings you can take advantage of
anonzzzies 7 hours ago||
I have been using 4.6 on Cerebras (or Groq with other models) since it dropped and it is a glimpse of the future. If AGI never happens but we manage to optimise things so I can run that on my handheld/tablet/laptop device, I am beyond happy. And I guess that might happen. Maybe with custom inference hardware like Cerebras. But seeing this generate at that speed is just jaw dropping.
fgonzag 1 hour ago||
Apple's M5 Max will probably be able to run it decently (as it will fix the biggest issue with the current lineup, prompt processing, in addition to a bandwidth bump).

That should easily run an 8 bit (~360GB) quant of the model. It's probably going to be the first actually portable machine that can run it. Strix Halo does not come with enough memory (or bandwidth) to run it (would need almost 180GB for weights + context even at 4 bits), and they don't have any laptops available with the top end (max 395+) chips, only mini PCs and a tablet.

Right now you only get the performance you want out of a multi GPU setup.

wyre 6 hours ago||
Cerebras and Groq both have their own novel chip designs. If they can scale and create a consumer-friendly product that would be great, but I believe their speeds are due to having all of their chips networked together, in addition to the design being specialized for LLM usage. AGI will likely happen at the data center level before we can get on-device performance equivalent to what we have access to today (affordably), but I would love to be wrong about that.
philipkiely 1 hour ago||
GLM 4.6 has been very popular from my perspective as an inference provider with a surprising number of people using it as a daily driver for coding. Excited to see the improvements 4.7 delivers, this model has great PMF so to speak.
buppermint 9 hours ago||
I've been playing around with this in z-ai and I'm very impressed. For my math/research heavy applications it is up there with GPT-5.2 thinking and Gemini 3 Pro. And its well ahead of K2 thinking and Opus 4.5.
sheepscreek 36 minutes ago|
> For my math/research heavy applications it is up there with GPT-5.2 thinking and Gemini 3 Pro. And it’s well ahead of K2 thinking and Opus 4.5.

I wouldn’t use the z-ai subscription for anything work related/serious if I were you. From what I understand, they can train on prompts + output from paying subscribers and I have yet to find an opt-out. Third party hosting providers like synthetic.new are a better bet IMO.

sumedh 2 hours ago||
When I click on Subscribe on any of the plan, nothing happens. I see this error on Dev Tools.

    page-3f0b51d55efc183b.js:1 Uncaught TypeError: Cannot read properties of undefined (reading 'toString')
        at page-3f0b51d55efc183b.js:1:16525
        at Object.onClick (page-3f0b51d55efc183b.js:1:17354)
        at 4677-95d3b905dc8dee28.js:1:24494
        at i8 (aa09bbc3-6ec66205233465ec.js:1:135367)
        at aa09bbc3-6ec66205233465ec.js:1:141453
        at nz (aa09bbc3-6ec66205233465ec.js:1:19201)
        at sn (aa09bbc3-6ec66205233465ec.js:1:136600)
        at cc (aa09bbc3-6ec66205233465ec.js:1:163602)
        at ci (aa09bbc3-6ec66205233465ec.js:1:163424)

A bit weird for an AI coding model company not to have a seamless buying experience

Bayaz 2 hours ago|
Subscribe didn’t do anything for me until I created an account.
sidgtm 4 hours ago||
I am quite impressed with this model. Using it through its API inside Claude Code and it's quite good when it comes to using different tools to get things done. No more weekly limit drama like with Claude, and their quarterly plan is available for just $8
sumedh 2 hours ago|
Can we use Claude models by default in Claude Code and then switch to GLM models if claude hits usage limits?
mcpeepants 2 hours ago||
This works:

  # z.ai Anthropic-compatible endpoint and API key, set in your shell first
  ZAI_ANTHROPIC_BASE_URL=xxx
  ZAI_ANTHROPIC_AUTH_TOKEN=xxx

  alias "claude-zai"="ANTHROPIC_BASE_URL=$ZAI_ANTHROPIC_BASE_URL ANTHROPIC_AUTH_TOKEN=$ZAI_ANTHROPIC_AUTH_TOKEN claude"
Then you can run `claude`, hit your limit, exit the session and `claude-zai -c` to continue (with context reset, of course).
sumedh 1 hour ago|||
Thanks will try it out.
CodeWriter23 1 hour ago|||
Why would one want to do that instead of using claude-zai -c from the start? All this is pretty new to me, kick a n00b a clue please.
esafak 10 hours ago|
The terminal bench scores look weak but nice otherwise. I hope once the benchmarks are saturated, companies can focus on shrinking the models. Until then, let the games continue.
anonzzzies 7 hours ago||
Shrinking and speed; speed is a major thing. Claude Code is just too slow: it's very good, but it has no reasonable way to handle simple requests because of the overhead, so everything should just be faster. If I were Anthropic, I would've bought Groq or Cerebras by now. Not sure if they (or the other big ones) are working on similar inference hardware to provide 2000 tok/s or more.
theshrike79 10 hours ago|||
z.ai models are crazy cheap. The one year lite plan is like 30€ (on sale though).

Complete no-brainer to get it as a backup with Crush. I've been using it for read-only analysis and implementing already planned tasks with pretty good results. It has a slight habit of expanding scope without being asked. Sometimes it's a good thing, sometimes it does useless work or messes things up a bit.

maxdo 9 hours ago|||
I tried several times. In my personal experience it is no match for the Claude models. There's almost no place for a second spot from my point of view. When you are doing things for work, each bug is hours of work, a potentially lost customer, etc. Why would you risk your money … just to have a backup?
theshrike79 7 hours ago|||
I'm using it for my own stuff and I'm definitely not dropping however much it costs for the Claude Max plans.

That's why I usually use Claude for planning, feed the issues to beads or a markdown file and then have Codex or Crush+GLM implement them.

For exploratory stuff I'm "pair-programming" with Claude.

At work we have all the toys, but I'm not putting my own code through them =)

maxdo 5 hours ago||
It's beyond me why you need Max plans. I use Opus/Sonnet/Gemini/GPT 5.2 every day in Cursor and I'm not paying for Claude Max.
sumedh 2 hours ago||||
> I tried several times

Did you try the new GLM 4.7 or the older models?

ewoodrich 5 hours ago|||
It's a perfectly serviceable fallback when Claude Code kicks me off in the middle of an edit on the Pro plan (which happens constantly to me now) and I just want to finish tweaking some CSS styles or whatever to wrap up. If you have a legitimate concern about losing customers then yes, you're probably in the wrong target market for a $3/mo plan...
maxdo 5 hours ago|||
You can have a $20 USD/mo Cursor plan with cutting-edge models, and pay per token for extra use when you need it; most of the time you will be OK within the basic Cursor plans, and you don't need to stick with one vendor. Today Claude is good, awesome; tomorrow Google is good, great.

I sometimes even ask several models to see which suggestion is best, or even mix two. Especially during bugfixes.

skippyboxedhero 4 hours ago|||
Openrouter with OpenCode.
ewoodrich 3 hours ago||
I've gone down that route already with Roo/Kilo Code and then OpenCode; these days it's OpenCode with the z.ai backend and/or Claude Code with the z.ai Anthropic-compatible endpoint, although I've been moving to OpenCode in general more and more over time.

GLM 4.6 with the Z.ai plan (haven't tried 4.7 yet) has worked well enough for straightforward changes, with a relatively large quota (more generous than CC, which only gets more frustrating on the Pro plan over time) and predictable billing, which is a big pro for me. I just got tired of having to police my OpenRouter usage to avoid burning through my credits.

But yes, OpenCode is awesome, particularly as it supports all the subscriptions I have access to via personal or work (GitHub Copilot/CC/z.ai). And as model churn/competition slows down over time, I can stick with whichever ends up having the best value/performance with sufficient quota for my personal projects, without fear of lock-in and enshittification.

sh3rl0ck 8 hours ago||||
I shifted from Crush to Opencode this week because Crush doesn't seem to be evolving in its utility; having a plan mode, subagents etc seems to not be a thing they're working on at the mo.

I'd love to hear your insight though, because maybe I just configured things wrong haha

allovertheworld 7 hours ago|||
this doesn’t mean much if you hit daily limits quickly anyway. So the API pricing matters more
CuriouslyC 10 hours ago|||
We're not gonna see significant model shrinkage until the money tap dries up. Between now and then, we'll see new benchmarks/evals that push the holes in model capabilities in cycles as they saturate each new round.
lanthissa 10 hours ago||
isn't gemini 3 flash already model shrinkage that does well in coding?
skippyboxedhero 4 hours ago|||
Xiaomi, Nvidia Nemotron, Minimax, lots of other smaller ones too. There are massive economic incentives to shrink models because they can be provided faster and at lower cost.

I think even with the money going in, there has to be some revenue supporting that development somewhere. And users are now looking at the cost. I have been using Anthropic Max for most of this year after checking out some of these other models, it is clearly overpriced (I would also say their moat of Claude Code has been breached). And Anthropic's API pricing is completely crazy when you use some of the paradigms that they suggest (agents/commands/etc) i.e. token usage is going up so efficient models are driving growth.

hedgehog 10 hours ago||||
Smaller open-weights models are also improving noticeably (like Qwen3 Coder 30B), the improvements are happening at all sizes.
cmrdporcupine 10 hours ago||
Devstral Small 24b looks promising as something I want to try fine tuning on DSLs, etc. and then embedding in tooling.
hedgehog 7 hours ago||
I haven't tried it yet, but yes. Qwen3 Next 80B works decently in my testing, and fast. I had mixed results with the new Nemotron, but it and the new Qwen models are both very fast to run.
Imustaskforhelp 9 hours ago|||
How many billion parameters is Gemini 3 Flash? I can't seem to find info about it online.
bigyabai 10 hours ago||
It's a good model, for what it is. Z.ai's big business prop is that you can get Claude Code with their GLM models at much lower prices than what Anthropic charges. This model is going to be great for that agentic coding application.
maxdo 9 hours ago||
… and wake up every night because you saved a few dollars, there are bugs, and they are due to this decision?
csomar 1 hour ago|||
Yeah because Claude never makes bugs?
bigyabai 7 hours ago||||
I pay for both Claude and Z.ai right now, and GLM-4.7 is more than capable for what I need. Opus 4.5 is nice but not worth the quota cost for most tasks.
Imustaskforhelp 9 hours ago|||
well I feel like all models are converging and maybe claude is good but only time will tell as gemini flash and GLM put pressure on claude/anthropic models

People (here) are definitely comparing it to Sonnet, so if you take this stance against saving a few dollars, I am sure you must hold the same opinion about Opus: that nobody should use Sonnet either.

Personally I am interested in open source models because they would be something which would have genuine value and competition after the bubble bursts
