Posted by scrlk 1/19/2026

GLM-4.7-Flash (huggingface.co)
378 points | 135 comments | page 2
river_otter 1/21/2026|
Excited to finally be able to give this a try today. I'm documenting my experience with aoe + OpenCode + LM Studio + GLM-4.7 Flash on a Mac Mini M4 Pro (64GB) in this thread, if anyone wants to follow along and/or give me advice on how badly I'm messing up the settings:

https://x.com/natebrake/status/2013978241573204246

Thus far, the 6-bit quant MLX weights were too much and crashed LM Studio with an OOM.

dfajgljsldkjag 1/19/2026||
Interesting that they're releasing a tiny (30B) variant, unlike the 4.5-Air distill, which was 106B parameters. It must be competing with the GPT mini and nano models, which I've personally found to be pretty weak. But this could be perfect for local LLM use cases.

In my experience, small-tier models are good for simple tasks like translation and trivia answering, but are useless for anything more complex. The 70B class and above is where models really start to shine.

jcuenod 1/19/2026||
Comparison to GPT-OSS-20B (irrespective of how you feel that model actually performs) doesn't fill me with confidence. Given GLM 4.7 seems like it could be competitive with Sonnet 4/4.5, I would have hoped that their flash model would run circles around GPT-OSS-120B. I do wish they would provide an Aider result for comparison. Aider may be saturated among SotA models, but it's not at this size.
syntaxing 1/19/2026||
Hoping a 30B-A3B runs circles around a 117B-A5.1B is wishful thinking, especially when you're testing embedded knowledge. From the numbers, I think this model excels at agentic tool calls compared to GPT-OSS-20B; the rest is about the same performance-wise.
victorbjorklund 1/19/2026|||
The benchmarks lie. I've been using GLM 4.7 and it's pretty okay with simple tasks, but it's nowhere near Sonnet. Still useful and good value, but it's not even close.
unsupp0rted 1/19/2026||
> Given GLM 4.7 seems like it could be competitive with Sonnet 4/4.5

Not for code. The quality is so low, it's roughly on par with Sonnet 3.5

infocollector 1/19/2026||
Maybe someone here has tackled this before. I’m trying to connect Antigravity or Cursor with GLM/Qwen coding models, but haven’t had any luck so far. I can easily run Open-WebUI + LLaMA on my 5090 Ubuntu box without issues. However, when I try to point Antigravity or Cursor to those models, they don’t seem to recognize or access them. Has anyone successfully set this up?
yowlingcat 1/19/2026|
I don't believe Antigravity or Cursor work well with pluggable models. It seems to be impossible with Antigravity, and with Cursor, while you can point the OpenAI-compatible API endpoint at one of your choice, not all features may work as expected.

My recommendation would be to use tools built to better support pluggable model backends. If you're looking for a Claude Code alternative, I've been liking OpenCode lately, and if you're looking for a Cursor alternative, I've heard great things about Roo/Cline/KiloCode, although I personally still use Continue out of habit.
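Whichever tool you land on, it's worth sanity-checking that the local server actually speaks the OpenAI chat-completions API before blaming the editor. A minimal sketch with the openai Python client - the port and model id are guesses, swap in whatever your llama.cpp / Open-WebUI / LM Studio instance reports:

  # Verify the local OpenAI-compatible endpoint responds before wiring it into an editor/agent.
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:8080/v1",  # assumed local endpoint and port
      api_key="not-needed-locally",         # most local servers ignore the key
  )

  resp = client.chat.completions.create(
      model="glm-4.7-flash",  # assumed id; list /v1/models to see the real one
      messages=[{"role": "user", "content": "Reply with the single word: ok"}],
      max_tokens=8,
  )
  print(resp.choices[0].message.content)

If that works but the editor still can't see the model, the issue is most likely on the tool side rather than your server.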

foobar10000 1/20/2026||
Claude Code Router
arbuge 1/19/2026||
Perhaps somebody more familiar with HF can explain this to me... I'm not too sure what's going on here:

https://huggingface.co/inference/models?model=zai-org%2FGLM-...

Mattwmaster58 1/19/2026|
I assume you're talking about 50t/s? My guess is that providers are poorly managing resources.

Slow inference is also present on z.ai; eyeballing it, the 4.7 Flash model was about twice as slow as regular 4.7 right now.

arbuge 1/19/2026||
None of it makes much sense. The model labelled as fastest has much higher latency. The one labelled as cheapest costs something, whereas the other one appears to be free (the price is blank). The context length on that one is also blank, which is equally unclear.
syntaxing 1/19/2026||
I find the GLM models really good, better than Qwen IMO. I wish they'd release a new GLM Air so I could run it on my Framework Desktop.
eurekin 1/19/2026||
I'm trying to run it, but getting odd errors. Has anybody managed to run it locally and can share the command?
veselin 1/19/2026||
What is the state of using quants? For chat models, a few errors or lost intelligence may matter a little. But what is happening to tool calling in coding agents? Does it fail catastrophically after a few steps in the agent?

I am interested in whether I can run it on a 24GB RTX 4090.

Also, would vLLM be a good option?
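For context, assuming vLLM supports this architecture at all, I was picturing its offline Python API just to see whether a quantized build fits in 24GB. Only a sketch - the repo id and quantization value are placeholders for whatever pre-quantized upload actually exists:

  # Sketch: load a pre-quantized GLM-4.7-Flash build with vLLM's offline API.
  # Repo id and quantization are assumptions; they must match the checkpoint you download.
  from vllm import LLM, SamplingParams

  llm = LLM(
      model="zai-org/GLM-4.7-Flash",  # assumed HF repo id (swap for an AWQ/GPTQ/FP8 upload)
      quantization="awq",             # must match the checkpoint format
      max_model_len=16384,            # modest context to leave VRAM for the KV cache
      gpu_memory_utilization=0.90,
  )

  out = llm.generate(
      ["Write a Python function that reverses a string."],
      SamplingParams(temperature=0.2, max_tokens=128),
  )
  print(out[0].outputs[0].text)

For agent use I'd still serve it through vLLM's OpenAI-compatible server so the coding agent handles the tool-call plumbing, assuming a tool-call parser exists for this model family yet.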

tgtweak 1/19/2026||
I like the byteshape quantizations - they are dynamically variable quantization weights tuned for quality vs. overall size. They seem to make fewer errors at lower "average" quantization than the Unsloth 4-bit quants. I think this is similar to variable-bitrate video compression, where you keep more bits where they help overall model accuracy.

You should be able to run this in 22GB of VRAM, so your 4090 (and even a 3090) would be safe. This model also uses MLA, so you can run pretty large context windows without eating up a ton of extra VRAM.

edit: ~19GB of VRAM for a Q4_K_M - MLX 4-bit is around 21GB, so you should be clear to run a lower quant on the 4090. Full BF16 is close to 60GB, so probably not viable.
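For anyone wondering where those figures come from, the back-of-the-envelope is just total params times bits-per-weight (MoE or not, all ~30B weights stay resident; only ~3B are active per token):

  # Rough weight-memory math; runtime overhead and KV cache come on top of this.
  params = 30e9  # ~30B total parameters

  for name, bpw in [("Q4_K_M (~4.8 bpw average)", 4.8), ("6-bit", 6.0), ("BF16", 16.0)]:
      gb = params * bpw / 8 / 1e9
      print(f"{name}: ~{gb:.0f} GB for weights alone")
  # -> roughly 18, 23 and 60 GB, which lines up with the 19GB / ~60GB figures above.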

omgwin 1/20/2026||
It's been mentioned that this model is MLA-capable, but it seems like the default vLLM params don't use MLA. I'm seeing a ~0.91MB KV footprint per token right now. Are you getting MLA to work?
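For reference, the per-token math I'm comparing against is the plain (non-MLA) KV layout; the layer/head numbers below are placeholders to be read from the model's config.json, not the real architecture:

  # Per-token KV-cache size for a standard MHA/GQA layout:
  #   2 (K and V) * layers * kv_heads * head_dim * bytes per element
  # With MLA active you'd instead store a much smaller compressed latent per token.
  def kv_bytes_per_token(layers, kv_heads, head_dim, dtype_bytes=2):
      return 2 * layers * kv_heads * head_dim * dtype_bytes

  # Placeholder config values, purely illustrative:
  print(kv_bytes_per_token(layers=48, kv_heads=32, head_dim=128) / 1e6, "MB/token")
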
regularfry 1/20/2026||
It's in the Ollama library at q4_K_M, which doesn't quite fit on my 4090 with the default context length, but it only offloads 8 layers to the CPU for me, and I'm getting usable enough token rates. That's probably the easiest way to get it. I haven't tried it with vLLM, but if it proves good enough to stick with, I might give it a try.
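If you want to poke at the context-length/offload trade-off without editing a Modelfile, the Ollama REST API takes num_ctx per request. The model tag here is a guess - use whatever "ollama list" shows for your pull:

  # Sketch: one request against the local Ollama server with an explicit context size.
  import requests

  resp = requests.post(
      "http://localhost:11434/api/chat",
      json={
          "model": "glm-4.7-flash:q4_K_M",  # assumed tag
          "messages": [{"role": "user", "content": "Say hello in one word."}],
          "options": {"num_ctx": 8192},     # smaller context -> fewer layers pushed off the GPU
          "stream": False,
      },
  )
  print(resp.json()["message"]["content"])
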
regularfry 1/20/2026||
Oh, and on agents: I did give it a go in opencode last night and it seemed to get a bit stuck but I think I probably pushed it too far. I asked it to explain TinyRecursiveModels and pointed it at the git repo URL. It got very confused by the returned HTML and went into a loop. But actually getting to the point of getting content back from a tool call? Absolutely fine.

I'm thinking of giving it a go with aider, but using something like gemma3:27b as the architect. I don't think you can have different models for different skills in opencode, but with smaller local models I suspect it's unavoidable for now.

karmakaze 1/19/2026||
Not much info beyond it being a 31B model. Here's info on GLM-4.7 [0] in general.

I suppose Flash is merely a distillation of that. Filed under mildly interesting for now.

[0] https://z.ai/blog/glm-4.7

lordofgibbons 1/19/2026||
How interesting it is depends purely on your use-case. For me this is the perfect size for running fine-tuning experiments.
redrove 1/19/2026||
A3.9B MoE apparently
kylehotchkiss 1/19/2026|
What's the minimum hardware you need to run this at a reasonable speed?

My Mac Mini probably isn't up to the task, but in the future I might be interested in a Mac Studio just to churn through long-running data-enrichment projects.

metalliqaz 1/19/2026|
I haven't tried it, but just based on the size (30B-A3B), you can probably get by with 32GB of RAM and 8GB of VRAM.
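If you go the llama.cpp route, partial offload is the knob that makes that split work. A sketch with llama-cpp-python, where the GGUF path and layer count are assumptions to tune until it stops OOMing:

  # Keep a subset of layers on an 8GB GPU and leave the rest in system RAM.
  from llama_cpp import Llama

  llm = Llama(
      model_path="./GLM-4.7-Flash-Q4_K_M.gguf",  # assumed local GGUF file
      n_gpu_layers=20,   # partial offload; lower it if you hit CUDA OOM
      n_ctx=8192,
  )

  out = llm.create_chat_completion(
      messages=[{"role": "user", "content": "One-line summary of what MoE means."}],
      max_tokens=64,
  )
  print(out["choices"][0]["message"]["content"])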