
Posted by Curiositry 1 day ago

How to run Qwen 3.5 locally (unsloth.ai)
176 points | 49 comments
b89kim 18 hours ago|
I’ve been benchmarking GGUF quants on Python tasks across a few hardware configs:

  - 4090: 27b-q4_k_m
  - A100: 27b-q6_k
  - 3x A100: 122b-a10b-q6_k_L
Using the Qwen team's "thinking" presets, non-agentic coding performance doesn't feel like a significant leap over unquantized GPT-OSS-120B. I see some hallucination and repetition on MuJoCo code with the default presence penalty. The 27b-q4_k_m on a 4090 generates 30-35 tok/s with good quality.
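The repetition mentioned above is usually tuned with the presence penalty. As a rough sketch (plain Python, not tied to any particular runtime, with made-up logit values), a presence penalty subtracts a fixed amount from the logit of every token that has already appeared in the output:

```python
def apply_presence_penalty(logits, generated, penalty=1.0):
    """Subtract `penalty` once from the logit of every token id that has
    already been generated. This is a one-time hit per token, unlike a
    frequency penalty, which scales with how often the token appeared."""
    penalized = dict(logits)
    for token_id in set(generated):
        if token_id in penalized:
            penalized[token_id] -= penalty
    return penalized

# Token 7 was already emitted twice; its logit still drops only once.
logits = {7: 3.2, 12: 2.9, 40: 1.5}
out = apply_presence_penalty(logits, generated=[7, 12, 7], penalty=1.0)
```

Raising the penalty makes already-seen tokens less likely at the next sampling step, which is why bumping it can break repetition loops.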
dryarzeg 13 hours ago|
That's quite a specific task for local models like these, though (I mean MuJoCo), so it might be underrepresented in the training data or RL. I'm not sure you'll see a significant leap in this direction in the next 0.5-2 years, although it's still possible.
b89kim 10 hours ago||
I’ve been testing these on other tasks too: IK (inverse kinematics), Kalman filters, and UI/DB boilerplate. Qwen 3.5 is multimodal and specialized for JS/webdev and agentic coding, so it's not surprising that an MoE model has some limitations in specific areas. Most LLMs have limited ability in mathematical/physical reasoning, and I don't think these tasks represent general performance. I'm just sharing personal experience for anyone curious.
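For anyone wanting to reproduce this kind of probe, a Kalman-filter task is easy to sanity-check by hand, since a minimal scalar (1-D) filter is only a few lines. A sketch with illustrative noise constants (not values from this thread):

```python
def kalman_step(x, P, z, Q=1e-3, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x: state estimate, P: estimate variance,
    z: new measurement, Q: process noise, R: measurement noise."""
    # Predict (constant-state model): estimate unchanged, uncertainty grows.
    P = P + Q
    # Update: blend prediction and measurement by the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

# Filter noisy readings of a constant true value of 5.0.
x, P = 0.0, 1.0
for z in [4.8, 5.2, 5.1, 4.9, 5.0]:
    x, P = kalman_step(x, P, z)
```

After a handful of measurements the estimate should converge toward 5.0 and the variance should shrink, which makes model-generated versions easy to verify against a reference like this.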
KronisLV 18 hours ago||
I had an annoying issue in a setup with two Nvidia L4 cards: running the MoE versions to get decent performance just didn't work with Ollama. It seems to be the same as these:

https://github.com/ollama/ollama/issues/14419

https://github.com/ollama/ollama/issues/14503

So for now I'm back to Qwen 3 30B A3B, which is kind of a bummer, because the previous model is pretty fast but kinda dumb, even for simple tasks like on-prem code review!
