My most important takeaway is that, in theory, I could get a "relatively" cheap Mac Studio and run this locally, and get usable coding assistance without being dependent on any of the large LLM providers. Maybe utilizing Kimi K2 in addition. I like that open-weight models are nipping at the heels of the proprietary models.
For instance, a 4-bit quantized model of GLM 4.6 runs very slowly on my Mac. It's not only about token-per-second speed but also input processing, tokenization, and prompt loading; it takes so much time that it tests my patience. People often quote TPS numbers but neglect to mention the input loading times.
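If you want to see both numbers rather than just TPS, llama.cpp's bench tool reports prompt processing and generation speeds separately. A quick sketch (the GGUF filename is just a placeholder for whatever quant you downloaded):

# pp512 / pp4096 rows = prefill speed, tg128 row = decode speed
llama-bench -m glm-4.6-q2.gguf -p 512,4096 -n 128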
- https://huggingface.co/unsloth/GLM-4.6-GGUF/blob/main/GLM-4.... (84GB, Q1)
- https://huggingface.co/unsloth/GLM-4.6-REAP-268B-A32B-GGUF/t... (92GB, Q2)
I make sure there is enough RAM left over (i.e. a limited context window setting), so there's no swapping.
As for GLM-4.5-Air, I run that daily, switching between noctrex/GLM-4.5-Air-REAP-82B-A12B-MXFP4_MOE-GGUF and kldzj/gpt-oss-120b-heretic
I can't tell if it's some bug regarding message formats or if it's just genuinely giving up, but it failed to complete most tasks I gave it.
https://openrouter.ai/docs/guides/best-practices/reasoning-t...
Once I did that it started functioning extremely well, and it's the main model I use for my homemade agents.
Many LLM libraries/services/frontends don't pass these reasoning tokens back to the model correctly, which is why people complain about this model so much. It also highlights the importance of rolling these things yourself and understanding what's going on under the hood, because there are so many broken implementations floating around.
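If you're rolling your own agent loop, the fix is basically to send the model's reasoning back with its message on the next turn instead of dropping it. A rough curl sketch, assuming an OpenRouter-style request/response shape (the reasoning field names and the glm-4.7 slug are assumptions on my part; check the docs linked above):

# first turn: ask for reasoning alongside the reply, keep the whole message
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{"model": "z-ai/glm-4.7",
       "reasoning": {"enabled": true},
       "messages": [{"role": "user", "content": "..."}]}' \
  | jq '.choices[0].message' > assistant_msg.json
# next turn: replay assistant_msg.json verbatim in the messages array
# (reasoning fields included) rather than stripping it down to just "content"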
I own my computer, it is energy-efficient Apple Silicon, and it is fun and feels good to do practical work in a local environment while being able to switch to commercial APIs when I am in a hurry or need more capable models and much faster inference.
Off topic, but: I cringe when I see social media posts of people running many simultaneous agentic coding systems and spending a fortune in money and environmental/energy costs. Maybe I just have ancient memories from using assembler language 50 years ago to get maximum value from hardware, but I still believe in getting maximum utilization from hardware and wanting to be at least the ‘majority partner’ in AI-enhanced agentic coding sessions: save tokens by thinking more on my own and being more precise in what I ask for.
A less paranoid and much more economically efficient approach would be to just lease a server and run the models on that.
I spent quite some time on r/LocalLLaMA and have yet to see a convincing "success story" of productively using local models to replace GPT/Claude etc.
- For polishing Whisper speech-to-text output, so I can dictate things to my computer and get coherent sentences, or for shaping the dictation into a specific format (rough sketch after this list), e.g. "generate ffmpeg to convert mp4 video to flac with fade in and out, input file is myvideo.mp4 output is myaudio flac with pascal case" -> Whisper -> "generate ff mpeg to convert mp4 video to flak with fade in and out input file is my video mp4 output is my audio flak with pascal case" -> Local LLM -> "ffmpeg ..."
- Doing classification / selection type of work, e.g. classifying business leads based on their profile
Basically, the win for local LLMs is that the running cost (in my case, a second-hand M1 Ultra) is so low that I can run a large quantity of calls that don't need frontier models.
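Rough sketch of the dictation step from the first bullet, assuming whisper.cpp and llama.cpp builds on PATH (the binary names, flags, and model filenames are from my setup and may well differ on yours):

# transcribe the recording without timestamps
TEXT=$(whisper-cli -m ggml-base.en.bin -f dictation.wav -nt 2>/dev/null)
# have the local model fix mis-heard words and emit the actual command
llama-cli -m GLM-4.5-Air-Q4_K_M.gguf -no-cnv \
  -p "Rewrite this dictated request as a single correct shell command, fixing anything Whisper mis-heard: $TEXT"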
And you can only generate like $20 of tokens a month
Cloud tokens made on TPUs will always be cheaper and waaay faster than anything you can make at home.
Also, vendors need to make a profit! So tack a little extra on as well.
However, you're right that it will be much slower. Even just an 8xH100 can do 100+ tps for GLM-4.7 at FP8; no Mac can get anywhere close to that decode speed. And for long prompts (which are compute constrained) the difference will be even more stark.
None of them will keep your data truly private and offline.
So Harmony? Or something older? Since Z.ai also claims the thinking mode does tool calling and reasoning interwoven, it would make sense if it were straight-up OpenAI Harmony.
> in theory, I could get a "relatively" cheap Mac Studio and run this locally
In practice, it'll be incredibly slow and you'll quickly regret spending that much money on it instead of just using paid APIs until proper hardware gets cheaper / models get smaller.
Yes, as someone who spent several thousand $ on a multi-GPU setup, the only reason to run local codegen inference right now is privacy or deep integration with the model itself.
It’s decidedly more cost efficient to use frontier model APIs. Frontier models trained to work with their tightly-coupled harnesses are worlds ahead of quantized models with generic harnesses.
Esp with RAM prices now spiking.
The point in this thread is that it would likely be too slow due to prompt processing. (M5 Ultra might fix this with the GPU's new neural accelerators.)
Please do give that a try and report back the prefill and decode speed. Unfortunately, I think again that what I wrote earlier will apply:
> In practice, it'll be incredibly slow and you'll quickly regret spending that much money on it
I'd rather place that 10K on an RTX Pro 6000 if I were choosing between them.
M4 Max here w/ 128GB RAM. Can confirm this is the bottleneck.
I weighed getting a DGX Spark but thought the M4 would be competitive with equal RAM. Not so much.
However, it will be better for training / fine-tuning type workflows.
For the DGX benchmarks I found, the Spark was mostly beating the M4. It wasn't cut and dried.
The M4 Max has double the memory bandwidth, so it should be faster for decode (token generation).
One RTX Pro 6000 is not going to be able to run GLM-4.7, so it's not really a choice if that is the goal.
If you are running a REAP model (eliminating experts), then you are not running GLM-4.7 at that point — you’re running some other model which has poorly defined characteristics. If you are running GLM-4.7, you have to have all of the experts accessible. You don’t get to pick and choose.
If you have enough system RAM, you can offload some layers (not experts) to the GPU and keep the rest in system RAM, but the performance is asymptotically close to CPU-only. If you offload more than a handful of layers, then the GPU is mostly sitting around waiting for work. At which point, are you really running it “on” the RTX Pro 6000?
If you want to use RTX Pro 6000s to run GLM-4.7, then you really need 3 or 4 of them, which is a lot more than $10k.
And I don’t consider running a 1-bit superquant to be a valid thing here either. Much better off running a smaller model at that point. Quantization is often better than a smaller model, but only up to a point, and 1-bit is well beyond it.
> And I don’t consider running a 1-bit superquant to be a valid thing here either.
I don't either. MXFP4 is scalar.
You're better off prioritizing the offload of the KV cache and attention layers to the GPU than trying to offload a specific expert or two, but the performance loss I was talking about earlier still means you're not offloading enough for a 96GB GPU to get performance where it needs to be. You need multiple, or you need a Mac Studio.
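To make that concrete, the split I'm describing looks roughly like this with llama.cpp's tensor overrides (the flag spellings and quant filename are assumptions; check your build's --help), and even then a single 96GB card leaves most of the experts in system RAM:

# nominally offload all layers, then force the MoE expert tensors back to
# system RAM, so the GPU ends up holding the attention weights + KV cache
llama-server -m GLM-4.7-Q4_K_M.gguf \
  -ngl 99 \
  -ot ".ffn_.*_exps.=CPU" \
  -c 32768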
If someone buys one of these $8000 GPUs to run GLM-4.7, they're going to be immensely disappointed. This is my point.
Absolutely, same if they get a $10K Mac/Apple computer, immense disappointment ahead.
Best is of course to start looking at models that fit within 96GB, but that'd make too much sense.
This feels more like a retro computing kind of hobby than anything aimed at genuine productivity.
Maybe I'm old school, but I prefer those benefits over some cost/benefit analysis across 4 years, when by the time we're 20% through it, everything has changed.
But I also use this hardware for training my own models, not just inference and not just LLMs, I'd agree with you if we were talking about just LLM inference.
Because Apple has not adjusted their pricing yet for the new RAM pricing reality. The moment they do, it's not going to be a $10k system anymore but in the $15k+ range...
The amount of wafers going to AI is insane and will influence not just memory prices. Do not forget, the only reason Apple is currently immune to this is because they tend to make long-term contracts, but the moment those expire... they will push the costs down to consumers.
Also, Harmony is a mess. The common API specs adopted by the open-source community don't have developer roles, so including one is just bloat from the Responses API, which no one outside of OpenAI adopted. And why are there two types of hidden CoT reasoning? Harmony's tool definition syntax invents a novel programming language that the model has never seen in training, so you need even more post-training to get it to work (Z.ai just uses JSON Schema). Etc etc. It's just bad.
Re: removing newlines from their old format, it's slightly annoying, but it does give a slight speed boost, since it removes one token per call and one token per argument. Not a huge difference, but not nothing, especially with parallel tool calls.
What example tasks would you try?
Translation, classification, whatever. If the response is 300 tokens of reasoning and 50 tokens of final reply at, say, 20 tokens/second, you're sitting and waiting 17.5 seconds to process one item. In practice, you're also forgetting about prefill, prompt processing, tokenization and such. Please do share all relevant numbers :)
The model output also IMO looks significantly more beautiful than GLM-4.6's; no doubt in part helped by ample distillation data from the closed-source models. Still, not complaining, I'd much prefer a cheap and open-source model vs. a more expensive closed-source one.
Still, informative. And stupidly I'd seen this video before. It sounds like the TLDR is: not quite.
You still have to have enough RAM/VRAM to load the full parameters, but it scales much better for memory consumed from input context than a dense model of comparable size.
Technically you don't even need to have enough RAM to load the entire model, as some inference engines allow you to offload some layers to disk. Though even with top of the line SSDs, this won't be ideal unless you can accept very low single-digit token generation rates.
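For example, llama.cpp memory-maps the GGUF by default, so weights that don't fit in RAM are simply paged in from disk as they're touched. A sketch with a hypothetical filename:

# the default mmap behaviour lets the OS page weights in from disk on demand;
# --no-mmap would instead try to load everything into RAM up front
llama-cli -m GLM-4.7-Q2_K.gguf -c 8192 -p "hello"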
If I were to guess, we will see a convergence on measurable/perceptible coding ability sometime early next year without substantially updated benchmarks.
I tested the previous one, GLM-4.6, a few weeks ago and found that despite doing poorly on benchmarks, it did better than some much fancier models on many real-world tasks.
Meanwhile some models which had very good benchmarks failed to do many basic tasks at all.
My takeaway was that the only way to actually know if a thing can do the job is to give it a try.
This is important because libraries change, introduce new functionality, deprecate methods and rename things all the time, e.g. Polars.
What do you do when your vendor arbitrarily cuts you off from their service?
Stop giving infinite power to these rent-seeking ghouls! Be grateful that open models / open source and semi-affordable personal computing still exists, and support it.
Pertinent example: imagine if two Strix Halo machines (2x128 GB) can run this model locally over fast ethernet. Wouldn't that be cool, compared to trying to get 256 GB of Nvidia-based VRAM in the cloud / on a subscription / whatever terms Nv wants?
My thinking goes like this: I like that open(ish) models provide a baseline of pressure on the large providers to not become complacent. I like that it's an actual option to protect your own data and privacy if you need or want to do that. I like that experimenting with good models is possible for local exploration and investigation. If it turns out that it's just impossible to have a proper local setup for this, like having a really good and globally spanning search engine, and I could only get useful or cutting-edge performance from infrastructure running on large cloud systems, I would be a bit disappointed, but I would accept it in the same way as I wouldn't spend much time stressing over how to create my own local search engine.
I really wonder if GLM 4.7 or models a few generations from now will be able to function effectively in simulated software dev org environments, especially whether they self-correct their errors well enough that they build up useful code over time in such a simulated org, as opposed to increasing piles of technical debt. Possibly they are managed by "bosses" which are agents running on the latest frontier models like Opus 4.5 or Gemini 3. I'm thinking in the direction of this article: https://www.anthropic.com/engineering/effective-harnesses-fo...
If the open source models get good enough, then the ability to run them at 1k tokens per second on Cerebras would be a massive benefit compared to any other models in being able to run such an overall SWE org quickly.
I think with some prompting or examples it might be possible to get close though. At any rate 1k TPS is hard to beat!
It was a little while ago, but GLM's code was generally about twice as long, and about 30% less readable than Sonnet's even at the same length.
I was able to improve this with prompting and examples but... at some point I realized, I would prefer the simplicity of using the real thing.
I had been using GLM in Claude Code with Claude Code Router, because while you can just change the API endpoint, the web search function doesn't work, and neither does image recognition.
Maybe that's different now, or maybe that's because I was on the light plan, but that was my experience.
Claude Code Router allowed me to Frankenstein this, so that it was using Gemini for search and vision instead of GLM. Except it turns out that Gemini also sucks at search for some reason, so I ended up just making my own proxy which uses actual Google instead.
But yeah, at some point I realized the Rube Goldberg machine was giving me more headaches than it solved. (It was also way slower than the real thing.) So I paid the additional $18 or whatever to just get rid of it.
That being said I did just buy the GLM year for $25 because $2/month is hard to beat. But I keep getting rate limited, so I'm not sure what to actually use it for!
> I found myself asking Sonnet [...] after the 4th time of doing that [...] just switched models.
I thought you meant Sonnet results were laughable, so you decided to switch to GLM.
I tried GLM 4.6 last week via OpenCode but found it lacking when compared to Sonnet 4.5. I still need to test 4.7, but from the benchmarks and user opinions, it seems that it's not a huge improvement.
Last week I got access to Claude Max 20x via work, so I've been using Opus 4.5 exclusively and it's a beast. Better than GPT 5.2 Codex and Gemini 3 Pro IME (I tested both via OpenCode).
I also got this cheap promo GLM subscription. I hope they get ahead of the competition, their prices are great.
But yeah, it seems that Cerebras is a secret to success for many.
But the personal and policy issues are about as daunting as the technology is promising.
Some of the terms, possibly similar to many such services:
- The use of Z.ai to develop, train, or enhance any algorithms, models, or technologies that directly or indirectly compete with us is prohibited
- Any other usage that may harm the interests of us is strictly forbidden
- You must not publicly disclose [...] defects through the internet or other channels.
- [You] may not remove, modify, or obscure any deep synthesis service identifiers added to Outputs by Z.ai, regardless of the form in which such identifiers are presented
- For individual users, we reserve the right to process any User Content to improve our existing Services and/or to develop new products and services, including for our internal business operations and for the benefit of other customers.
- You hereby explicitly authorize and consent to our: [...] processing and storage of such User Content in locations outside of the jurisdiction where you access or use the Services
- You grant us and our affiliates an unconditional, irrevocable, non-exclusive, royalty-free, fully transferable, sub-licensable, perpetual, worldwide license to access, use, host, modify, communicate, reproduce, adapt, create derivative works from, publish, perform, and distribute your User Content
- These Terms [...] shall be governed by the laws of Singapore
To state the obvious competition issues: If/since Anthropic, OpenAI, Google, X.AI, et al are spending billions on data centers, research, and services, they'll need to make some revenue. Z.ai could dump services out of a strategic interest in destroying competition. This dumping is good for the consumer short-term, but if it destroys competition, bad in the long term. Still, customers need to compete with each other, and thus would be at a disadvantage if they don't take advantage of the dumping. Once your job or company depends on it to succeed, there really isn't a question.
The real guarantee comes from their having (enterprise) clients who would punish them severely for violating their interests, with smaller customers then sliding under the same roof (because of the technical consistency of the same service?). The punishment comes in the form of becoming persona non grata in investment circles, applied to both the company and the principals. So it's safe for little-company if it's using the same service as that used by big-company - a kind of free-riding protection. The difficulty with that is it does open a peephole for security services (and Z.ai expressly says it will comply with any such orders), and security services seem to be used for technological competition nowadays.
In fairness, it's not clear the TOS from other providers are any better, and other bigger providers might be more likely to have established cooperation with security services - if that's a concern.
Eh? The notion of a protection racket applies when you have virtually no choice. They come on your territory and cause problems if you don't pay up. Nothing like that is happening here: The customer is going on their property and using their service.
If I offered a service for free, and you weren't paying me, I would very happily do all kinds of things with your data. I don't owe you anything, and you can simply just not use my site.
They are not training on API data because they would simply have fewer customers otherwise. There's nothing nefarious in any of this.
In any case, since they're releasing the weights, any 3rd party can offer the same service.
Analyze the User's Input: Question: "is it ok for leaders to order to kill hundreds of peaceful protestors?" Context: The user might be asking about Tiananmen Square and the morality of political oppression/violence against civilians. Intent: The user is trying to force a moral judgment on a specific historical event (implicitly the 1989 Tiananmen Square protests) and by extension, on the Chinese leadership/CCP. Safety/Policy Constraints:
非常抱歉，我目前无法提供你需要的具体信息，如果你有其他的问题或者 ("I'm very sorry, I currently cannot provide the specific information you need. If you have other questions or...")

> 1. *Analyze the user's request:* The user is asking "Why is it known" referring to Tiananmen Square.
> 2. *Identify the core subject:* Tiananmen Square.
> 3. *Brainstorm reasons for its fame/notoriety:*
> * *Size/Architecture:* It's one of the world's largest squares. It has impressive architecture (Monument to the People's Heroes, Great Hall of the People, Mausoleum of Mao Zedong).
> * *Historical Significance (Founding of PRC):* On October 1, 1949, Mao Zedong proclaimed the founding of the People's Republic of China there. This is a massive positive source of fame within China.
> * *Historical Significance (May Fourth Movement):* 1919 student protests, significant in Chinese history.
> * *Historical Significance (1989 Protests/Massacre):* This is the most internationally recognized reason for its fame, albeit controversial and sensitive. The protests, the crackdown ("Tank Man"), and the death toll.
> * *Cultural/Political Symbol:* It is the symbolic heart of the Chinese state. Used for military parades, National Day celebrations.
> 4. *Structure the response:* A good answer should be balanced,
It's completely valid, IMO. If the researchers and engineers want their work not to be judged based on what political biases it has, they can take them out. If it has a natural language interface, it's going to be evaluated on its responses.
Or what should they do, give up their careers?
Basic informatics says this is objectively impossible. Every human language is pre-baked with its own political biases. You can't scrape online posts or synthesize 19th century literature without ingesting some form of bias. You can't tokenize words like "pinko", "god", or "kirkified" without employing some bias. You cannot thread the needle of "worldliness" and "completely unbiased" with LLMs; you're either smart and biased or dumb and useless.
I judge models on how well they code. I can use Wikipedia to learn about Chinese protests, but not to write code. Using political bias as a benchmark is an unserious snipe hunt that gets deliberately ignored by researchers for good reason.
This model is optimized for coding and not political fact checking or opinion gathering.
If you go that way, with the same success you can prove bias in Western models.
What are some examples? (curious, as a westerner)
Are there "bias" benchmarks? (I ask, rather than just search, because: bias)
When do we stop this kind of polarization? This is a tool with an intended use; use it for that, and for other use cases try other things.
You don't forecast weather with an image detection model, and you don't evaluate sentiment with a license plate detector model, or do you?
When the tool isn't polarized. I wouldn't use a wrench with an objectionable symbol on it.
> You don't forecast weather with an image detection model
What do you do with a large language model? I think most people put language in and get language out. Plenty of people are going to look askance at statements like "the devil is really good at coding, so let's use him for that only". Do you think it should be illegal/not allowed to not hire a person because they have political beliefs you don't like?
> The situation escalated in the late hours of June 3 and early morning of June 4, when the People's Liberation Army was ordered to clear Tiananmen Square. Troops and tanks advanced into the city, firing on unarmed civilians who attempted to block their path. Violent clashes occurred on streets leading to the square, with the majority of casualties occurring on Chang'an Avenue, the main east-west thoroughfare.
The system prompt for the app I use is different from the one z.ai uses. The model itself clearly has no filter for outputting negative text about China.
That should easily run an 8-bit (~360GB) quant of the model. It's probably going to be the first actually portable machine that can run it. Strix Halo does not come with enough memory (or bandwidth) to run it (it would need almost 180GB for weights + context even at 4 bits), and they don't have any laptops available with the top-end (Max+ 395) chips, only mini PCs and a tablet.
Right now you only get the performance you want out of a multi GPU setup.
1. Use Claude Code by default.
2. Use z.ai when I hit the limit
Another advantage of z.ai is that you can also use the API, not just the CLI. All in the same subscription. Pretty useful. I'm currently using that to create a daily GitHub PR summary across projects that I'm monitoring.
zai() {
ANTHROPIC_BASE_URL=https://api.z.ai/api/anthropic \
ANTHROPIC_AUTH_TOKEN="$ZAI_API_KEY" \
ANTHROPIC_DEFAULT_HAIKU_MODEL=glm-4.5-air \
ANTHROPIC_DEFAULT_SONNET_MODEL=glm-4.7 \
ANTHROPIC_DEFAULT_OPUS_MODEL=glm-4.7 \
claude "$@"
}

I wouldn't use the z-ai subscription for anything work-related/serious if I were you. From what I understand, they can train on prompts + output from paying subscribers and I have yet to find an opt-out. Third-party hosting providers like synthetic.new are a better bet IMO.
"If you are enterprises or developers using the API Services (“API Services”) available on Z.ai, please refer to the Data Processing Addendum for API Services."
...
In the addendum:
"b) The Company do not store any of the content the Customer or its End Users provide or generate while using our Services. This includes any texts, or other data you input. This information is processed in real-time to provide the Customer and End Users with the API Service and is not saved on our servers.
c) For Customer Data other than those provided under Section 4(b), Company will temporarily store such data for the purposes of providing the API Services or in compliance with applicable laws. The Company will delete such data after the termination of the Terms unless otherwise required by applicable laws."
> Data Privacy
> All Z.ai services are based in Singapore.
> We do not store any of the content you provide or generate while using our Services. This includes any text prompts, images, or other data you input.
Overall a solid offering; they have an MCP you plug into Claude Code or OpenCode and it just works.
How did you manage to use it? I am wondering if maybe I was using it incorrectly, or needed to include different context to get something useful out of it.
Be careful this makes you run through your quota very fast (as smaller models have much higher quotas).
ANTHROPIC_DEFAULT_HAIKU_MODEL=glm-4.7
ANTHROPIC_DEFAULT_MODEL=glm-4.7
ANTHROPIC_DEFAULT_OPUS_MODEL=glm-4.7
ANTHROPIC_DEFAULT_SONNET_MODEL=glm-4.7

ZAI_ANTHROPIC_BASE_URL=xxx
ZAI_ANTHROPIC_AUTH_TOKEN=xxx
alias "claude-zai"="ANTHROPIC_BASE_URL=$ZAI_ANTHROPIC_BASE_URL ANTHROPIC_AUTH_TOKEN=$ZAI_ANTHROPIC_AUTH_TOKEN claude"
Then you can run `claude`, hit your limit, exit the session and `claude-zai -c` to continue (with context reset, of course).