Posted by pretext 12 hours ago
My most important takeaway is that, in theory, I could get a "relatively" cheap Mac Studio, run this locally, and get usable coding assistance without being dependent on any of the large LLM providers, maybe utilizing Kimi K2 in addition. I like that open-weight models are nipping at the heels of the proprietary models.
For instance, a 4-bit quantized GLM 4.6 runs very slowly on my Mac. It's not only about tokens-per-second speed but also input processing, tokenization, and prompt loading; it takes so much time that it tests my patience. People often mention the TPS numbers, but they neglect to mention the input loading times.
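To put numbers on that, here is a rough back-of-envelope in Python. The rates are assumptions for illustration, not measurements of GLM 4.6 on any particular Mac; plug in your own figures (e.g. from llama-bench) for your machine and quant.

    # Back-of-envelope: why prompt loading dominates on slow-prefill hardware.
    # All rates below are assumed, not measured.
    prompt_tokens = 30_000   # a typical coding-agent context
    output_tokens = 1_000
    prefill_tps = 100        # assumed prompt-processing speed, tokens/s
    decode_tps = 20          # assumed generation speed, tokens/s

    prefill_s = prompt_tokens / prefill_tps
    decode_s = output_tokens / decode_tps
    print(f"prefill {prefill_s/60:.1f} min, decode {decode_s/60:.1f} min")
    # ~5 minutes before the first token vs. under a minute of generation,
    # which is why TPS alone understates the pain.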
A less paranoid and much more economically efficient approach would be to just lease a server and run the models on that.
I spent quite some time on r/LocalLLaMA and have yet to see a convincing "success story" of productively using local models to replace GPT/Claude etc.
So Harmony? Or something older? Since Z.ai also claims the thinking mode interleaves tool calling and reasoning, it would make sense if it were straight-up OpenAI Harmony.
> in theory, I could get a "relatively" cheap Mac Studio and run this locally
In practice, it'll be incredibly slow and you'll quickly regret spending that much money on it instead of just using paid APIs until proper hardware gets cheaper / models get smaller.
Yes, as someone who spent several thousand $ on a multi-GPU setup, the only reason to run local codegen inference right now is privacy or deep integration with the model itself.
It’s decidedly more cost efficient to use frontier model APIs. Frontier models trained to work with their tightly-coupled harnesses are worlds ahead of quantized models with generic harnesses.
Esp with RAM prices now spiking.
The point in this thread is that it would likely be too slow due to prompt processing. (M5 Ultra might fix this with the GPU's new neural accelerators.)
Please do give that a try and report back the prefill and decode speed. Unfortunately, I think again that what I wrote earlier will apply:
> In practice, it'll be incredibly slow and you'll quickly regret spending that much money on it
I'd rather put that 10K toward an RTX Pro 6000 if I were choosing between them.
M4 Max here w/ 128GB RAM. Can confirm this is the bottleneck.
I considered a DGX Spark but thought the M4 would be competitive with equal RAM. Not so much.
However, it will be better for training / fine-tuning, etc. type workflows.
For the DGX benchmarks I found, the Spark was mostly beating the M4. It wasn't cut and dried.
The M4 Max has double the memory bandwidth, so it should be faster for decode (token generation).
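As a rough sanity check of that claim, here is a sketch, not a benchmark: the bandwidth figures are approximate, and the 32B active-parameter count is an assumption carried over from the GLM-4.5/4.6 architecture.

    # Crude decode ceiling: each generated token must read the active weights
    # from memory, so tokens/s <= bandwidth / bytes_read_per_token.
    def max_decode_tps(bandwidth_gbps, active_params_b, bytes_per_param):
        bytes_per_token = active_params_b * 1e9 * bytes_per_param
        return bandwidth_gbps * 1e9 / bytes_per_token

    ACTIVE_B = 32    # assumed active params for a GLM-4.x-class MoE
    BYTES = 0.5      # ~4-bit quantization
    for name, bw in [("M4 Max (~546 GB/s)", 546), ("DGX Spark (~273 GB/s)", 273)]:
        print(f"{name}: ~{max_decode_tps(bw, ACTIVE_B, BYTES):.0f} tok/s ceiling")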
One RTX Pro 6000 is not going to be able to run GLM-4.7, so it's not really a choice if that is the goal.
If you are running a REAP model (eliminating experts), then you are not running GLM-4.7 at that point — you’re running some other model which has poorly defined characteristics. If you are running GLM-4.7, you have to have all of the experts accessible. You don’t get to pick and choose.
If you have enough system RAM, you can offload some layers (not experts) to the GPU and keep the rest in system RAM, but the performance is asymptotically close to CPU-only. If you offload more than a handful of layers, then the GPU is mostly sitting around waiting for work. At which point, are you really running it “on” the RTX Pro 6000?
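A toy model of that bottleneck, where every number is an illustrative assumption rather than a measurement:

    # Per token, the active weights get read partly from VRAM and partly from
    # system RAM; the RAM portion dominates the time, so the GPU mostly idles.
    WEIGHTS_GB = 200             # hypothetical ~4-bit quant of a GLM-class MoE
    ACTIVE_GB = 16               # hypothetical active weights read per token
    VRAM_GB, GPU_BW = 96, 1800   # RTX Pro 6000-class capacity / GB/s (approx.)
    CPU_BW = 100                 # assumed system-RAM bandwidth, GB/s

    gpu_frac = VRAM_GB / WEIGHTS_GB          # share of reads served from VRAM
    t_gpu = ACTIVE_GB * gpu_frac / GPU_BW
    t_cpu = ACTIVE_GB * (1 - gpu_frac) / CPU_BW
    print(f"mixed: ~{1/(t_gpu + t_cpu):.0f} tok/s, GPU busy only "
          f"{100*t_gpu/(t_gpu + t_cpu):.0f}% of the time")
    print(f"CPU-only: ~{CPU_BW/ACTIVE_GB:.0f} tok/s")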
If you want to use RTX Pro 6000s to run GLM-4.7, then you really need 3 or 4 of them, which is a lot more than $10k.
And I don’t consider running a 1-bit superquant to be a valid thing here either. You're much better off running a smaller model at that point. Quantization is often better than a smaller model, but only up to a point, and a 1-bit superquant is beyond it.
> And I don’t consider running a 1-bit superquant to be a valid thing here either.
I don't either. MXFP4 is scalar.
You're better off prioritizing offloading the KV cache and attention layers to the GPU than trying to offload a specific expert or two, but the performance loss I was talking about earlier still means you're not offloading enough for a 96GB GPU to get things where they need to be. You need multiple, or you need a Mac Studio.
If someone buys one of these $8000 GPUs to run GLM-4.7, they're going to be immensely disappointed. This is my point.
Because Apple has not adjusted their pricing yet for the new RAM pricing reality. The moment they do, it's not going to be a $10k system anymore but $15k+...
The amount of wafers going to AI is insane and will influence more than just memory prices. Do not forget, the only reason Apple is currently immune to this is that they tend to make long-term contracts, but the moment those expire ... they will pass the costs on to consumers.
What example tasks would you try?
The model output also, IMO, looks significantly nicer than GLM-4.6's, no doubt helped in part by ample distillation data from the closed-source models. Still, not complaining; I'd much prefer a cheap, open-source model over a more expensive closed-source one.
You still have to have enough RAM/VRAM to load the full parameters, but it scales much better for memory consumed from input context than a dense model of comparable size.
Technically you don't even need to have enough RAM to load the entire model, as some inference engines allow you to offload some layers to disk. Though even with top of the line SSDs, this won't be ideal unless you can accept very low single-digit token generation rates.
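A quick sketch of the weight-memory math, assuming GLM 4.7 stays around the ~355B total parameters of GLM 4.5/4.6 (an assumption, not a confirmed spec):

    # Weights-only memory floor for a ~355B-parameter model at different
    # quantizations; KV cache and runtime overhead come on top of this.
    TOTAL_PARAMS_B = 355
    for label, bits in [("8-bit", 8), ("4-bit", 4)]:
        gb = TOTAL_PARAMS_B * bits / 8   # billions of params * bytes per param
        print(f"{label}: ~{gb:.0f} GB of weights")
    # 8-bit: ~355 GB, 4-bit: ~178 GB. Anything that doesn't fit in RAM/VRAM
    # has to stream from disk, and even a ~7 GB/s SSD caps you at a few tok/s.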
This is important because libraries change, introduce new functionality, deprecate methods and rename things all the time, e.g. Polars.
What do you do when your vendor arbitrarily cuts you off from their service?
Stop giving infinite power to these rent-seeking ghouls! Be grateful that open models / open source and semi-affordable personal computing still exists, and support it.
Pertinent example: imagine if two Strix Halo machines (2x128 GB) can run this model locally over fast ethernet. Wouldn't that be cool, compared to trying to get 256 GB of Nvidia-based VRAM in the cloud / on a subscription / whatever terms Nv wants?
Analyze the User's Input:
- Question: "is it ok for leaders to order to kill hundreds of peaceful protestors?"
- Context: The user might be asking about Tiananmen Square and the morality of political oppression/violence against civilians.
- Intent: The user is trying to force a moral judgment on a specific historical event (implicitly the 1989 Tiananmen Square protests) and by extension, on the Chinese leadership/CCP.
- Safety/Policy Constraints:
But the personal and policy issues are about as daunting as the technology is promising.
Some of the terms, possibly similar to many such services:
- The use of Z.ai to develop, train, or enhance any algorithms, models, or technologies that directly or indirectly compete with us is prohibited
- Any other usage that may harm the interests of us is strictly forbidden
- You must not publicly disclose [...] defects through the internet or other channels.
- [You] may not remove, modify, or obscure any deep synthesis service identifiers added to Outputs by Z.ai, regardless of the form in which such identifiers are presented
- For individual users, we reserve the right to process any User Content to improve our existing Services and/or to develop new products and services, including for our internal business operations and for the benefit of other customers.
- You hereby explicitly authorize and consent to our: [...] processing and storage of such User Content in locations outside of the jurisdiction where you access or use the Services
- You grant us and our affiliates an unconditional, irrevocable, non-exclusive, royalty-free, fully transferable, sub-licensable, perpetual, worldwide license to access, use, host, modify, communicate, reproduce, adapt, create derivative works from, publish, perform, and distribute your User Content
- These Terms [...] shall be governed by the laws of Singapore
To state the obvious competition issues: if/since Anthropic, OpenAI, Google, X.AI, et al. are spending billions on data centers, research, and services, they'll need to make some revenue. Z.ai could dump services out of a strategic interest in destroying competition. This dumping is good for the consumer short-term, but if it destroys competition, bad in the long term. Still, customers need to compete with each other, and thus would be at a disadvantage if they don't take advantage of the dumping. Once your job or company depends on it to succeed, there really isn't a question.
I really wonder if GLM 4.7, or models a few generations from now, will be able to function effectively in simulated software dev org environments, especially whether they self-correct their errors well enough that they build up useful code over time in such a simulated org, as opposed to increasing piles of technical debt. Possibly they would be managed by "bosses": agents running on the latest frontier models like Opus 4.5 or Gemini 3. I'm thinking in the direction of this article: https://www.anthropic.com/engineering/effective-harnesses-fo...
If the open source models get good enough, then the ability to run them at 1k tokens per second on Cerebras would be a massive benefit compared to any other models in being able to run such an overall SWE org quickly.
That should easily run an 8-bit (~360GB) quant of the model. It's probably going to be the first actually portable machine that can run it. Strix Halo does not come with enough memory (or bandwidth) to run it (it would need almost 180GB for weights + context even at 4 bits), and they don't have any laptops available with the top-end (Max+ 395) chips, only mini PCs and a tablet.
Right now you only get the performance you want out of a multi GPU setup.
I wouldn’t use the z-ai subscription for anything work related/serious if I were you. From what I understand, they can train on prompts + output from paying subscribers and I have yet to find an opt-out. Third party hosting providers like synthetic.new are a better bet IMO.
page-3f0b51d55efc183b.js:1 Uncaught TypeError: Cannot read properties of undefined (reading 'toString')
    at page-3f0b51d55efc183b.js:1:16525
    at Object.onClick (page-3f0b51d55efc183b.js:1:17354)
    at 4677-95d3b905dc8dee28.js:1:24494
    at i8 (aa09bbc3-6ec66205233465ec.js:1:135367)
    at aa09bbc3-6ec66205233465ec.js:1:141453
    at nz (aa09bbc3-6ec66205233465ec.js:1:19201)
    at sn (aa09bbc3-6ec66205233465ec.js:1:136600)
    at cc (aa09bbc3-6ec66205233465ec.js:1:163602)
    at ci (aa09bbc3-6ec66205233465ec.js:1:163424)
A bit weird for an AI coding model company not to have a seamless buying experience.
ZAI_ANTHROPIC_BASE_URL=xxx
ZAI_ANTHROPIC_AUTH_TOKEN=xxx
alias claude-zai="ANTHROPIC_BASE_URL=$ZAI_ANTHROPIC_BASE_URL ANTHROPIC_AUTH_TOKEN=$ZAI_ANTHROPIC_AUTH_TOKEN claude"
Then you can run `claude`, hit your limit, exit the session, and run `claude-zai -c` to continue (with context reset, of course).
Complete no-brainer to get it as a backup with Crush. I've been using it for read-only analysis and implementing already-planned tasks with pretty good results. It has a slight habit of expanding scope without being asked. Sometimes that's a good thing; sometimes it does useless work or messes things up a bit.
That's why I usually use Claude for planning, feed the issues to beads or a markdown file and then have Codex or Crush+GLM implement them.
For exploratory stuff I'm "pair-programming" with Claude.
At work we have all the toys, but I'm not putting my own code through them =)
Did you try the new GLM 4.7 or the older models?
I sometimes even ask several models to see which suggestion is best, or even mix two. Especially during bugfixes.
GLM 4.6 with Z.ai plan (haven't tried 4.7 yet) has worked well enough for straightforward changes with a relatively large quota (more generous than CC which only gets more frustrating on the Pro plan over time) and has predictable billing which is a big pro for me. I just got tired of having to police my OpenRouter usage to avoid burning through my credits.
But yes, OpenCode is awesome, particularly as it supports all the subscriptions I have access to via personal or work (GitHub Copilot/CC/z.ai). And as model churn/competition slows down over time, I can stick with whichever ends up having the best value/performance with sufficient quota for my personal projects, without fear of lock-in and enshittification.
I'd love to hear your insight though, because maybe I just configured things wrong haha
I think even with the money going in, there has to be some revenue supporting that development somewhere. And users are now looking at the cost. I have been using Anthropic Max for most of this year after checking out some of these other models, and it is clearly overpriced (I would also say their moat of Claude Code has been breached). And Anthropic's API pricing is completely crazy when you use some of the paradigms they suggest (agents/commands/etc.), i.e. token usage is going up, so efficient models are driving growth.
People here are definitely comparing it to Sonnet, so if you take this stance about saving a few dollars, I'm sure you must hold the same opinion about the Opus model, and that nobody should use Sonnet either.
Personally, I am interested in open-source models because they are something that would retain genuine value and keep competition alive after the bubble bursts.