Posted by UmYeahNo 1 day ago

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

Curious to know your experience running local LLMs with a well-specced M3 Ultra or M4 Pro Mac Studio. I don't see a lot of discussion of the Mac Studio for local LLMs, but it seems like you could fit big models in memory with the shared VRAM. I assume token generation would be slow, but you might get higher-quality results because you can keep larger models in memory.
44 points | 27 comments
josefcub 13 hours ago|
I am! I moved from a shoebox Linux workstation with 32GB of RAM and a 12GB RTX 3060 to a 256GB M3 Ultra, mainly for the unified memory.

I've only had it a couple of months, but so far it's proving its worth in the quality of LLM output, even quantized.

I generally run Qwen3-VL 235B at Q4_K_M quantization so that it fits. It leaves me plenty of RAM for workstation tasks while delivering around 30 tok/s.

I use the smaller Qwen3 models (like qwen3-coder) in tandem; they run much faster, of course, and I tend to run them at higher quants, up to Q8, for quality.

The gigantic RAM's biggest boon, I've found, is being able to run models with full context allocated, which lets me hand them larger and more complicated tasks than I could before. That alone makes the money I spent worth it, IMO.

I did manage to get GLM-4.7 (a 358B model) running at Q3 quantization; its output is adequate quality-wise, though it only delivers about 15 tok/s, and I had to cut context down to 128k to leave enough room for the desktop.

If you get something this big, it's a powerhouse, though not nearly as much of one as a dedicated Nvidia GPU rig. The point is to be able to run the models _adequately_, not at production speeds, and still get your work done. I found the price/performance/energy usage compelling at this level, and I'm very satisfied.

ryan-c 19 hours ago||
I'm using an M3 Ultra with 512GB of RAM, running LM Studio and mostly MLX models. It runs massive models at reasonable tokens per second, though prompt processing can be slow. It handles long conversations fine as long as the KV cache hits. It's usable with opencode and crush, though my main motivation for getting it was to be able to process personal data (e.g. emails) privately and to experiment freely with abliterated models for security research. I also appreciate being able to run it off solar power.
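
For anyone who wants to poke at MLX models outside LM Studio, the mlx-lm Python package is the simplest route. A rough, minimal sketch (the model repo name below is just an example of an mlx-community quant, not necessarily what I run):

    # pip install mlx-lm  (Apple Silicon only)
    from mlx_lm import load, generate

    # Any MLX-format model from the mlx-community org on Hugging Face should work;
    # this particular repo name is illustrative.
    model, tokenizer = load("mlx-community/Qwen3-14B-4bit")

    prompt = "Explain KV caching in one short paragraph."
    text = generate(model, tokenizer, prompt=prompt, max_tokens=200)
    print(text)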

I'm still trying to figure out a good solution for fast external storage; I only went for 1TB internal, which doesn't go very far with models that have hundreds of billions of parameters.

gneuron 8 hours ago|
This is the way brother
pcf 11 hours ago||
Below are my test results after running local LLMs on two machines.

I'm using LM Studio now for ease of use and simple logging/viewing of previous conversations. Later I'm gonna use my own custom local LLM system on the Mac Studio, probably orchestrated by LangChain and running models with llama.cpp.
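
The rough shape I have in mind for that custom system is LangChain driving llama.cpp through its Python bindings. An untested sketch (model path and parameters below are placeholders, not a final design):

    # pip install langchain-community llama-cpp-python
    from langchain_community.llms import LlamaCpp

    llm = LlamaCpp(
        model_path="models/qwen3-14b-q6_k.gguf",  # placeholder path to a local GGUF file
        n_gpu_layers=-1,   # offload all layers to the GPU (Metal on Apple Silicon)
        n_ctx=32768,       # context window to allocate
        temperature=0.2,
    )

    print(llm.invoke("Summarize this conversation in three bullet points."))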

My goal has always been to use them in ensembles in order to reduce model biases. The same principle was recently introduced as a feature called "model council" in Perplexity Max: https://www.perplexity.ai/hub/blog/introducing-model-council

Chats will be stored in and recalled from a PostgreSQL database with extensions for vectors (pgvector) and graph (Apache AGE).
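
The storage layer will look roughly like this (schema, embedding dimension, and connection details below are placeholders; the Apache AGE graph side isn't shown):

    # pip install psycopg2-binary  -- assumes a local Postgres with the pgvector extension installed
    import psycopg2

    conn = psycopg2.connect("dbname=chats")
    cur = conn.cursor()

    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS chat_messages (
            id         bigserial PRIMARY KEY,
            session_id text NOT NULL,
            role       text NOT NULL,           -- 'user' or 'assistant'
            content    text NOT NULL,
            embedding  vector(768),             -- dimension depends on the embedding model
            created_at timestamptz DEFAULT now()
        );
    """)
    conn.commit()

    def recall(query_embedding, k=5):
        """Return the k stored messages nearest to the query embedding (cosine distance)."""
        vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
        cur.execute(
            "SELECT content FROM chat_messages ORDER BY embedding <=> %s::vector LIMIT %s;",
            (vec, k),
        )
        return [row[0] for row in cur.fetchall()]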

For both sets of tests below, MLX was used when available, but it ultimately ran at almost the same speed as GGUF.

I hope this information helps someone!

/////////

Mac Studio M3 Ultra (default w/96 GB RAM, 1 TB SSD, 28C CPU, 60C GPU):

• Gemma 3 27B (Q4_K_M): ~30 tok/s, TTFT ~0.52 s

• GPT-OSS 20B: ~150 tok/s

• GPT-OSS 120B: ~23 tok/s, TTFT ~2.3 s

• Qwen3 14B (Q6_K): ~47 tok/s, TTFT ~0.35 s

(GPT-OSS quants and 20B TTFT info not available anymore)

//////////

MacBook Pro M1 Max 16.2" (64 GB RAM, 2 TB SSD, 10C CPU, 32C GPU):

• Gemma 3 1B (Q4_K): ~85.7 tok/s, TTFT ~0.39 s

• Gemma 3 27B (Q8_0): ~7.5 tok/s, TTFT ~3.11 s

• GPT-OSS 20B (8bit): ~38.4 tok/s, TTFT ~21.15 s

• LFM2 1.2B: ~119.9 tok/s, TTFT ~0.57 s

• LFM2 2.6B (Q6_K): ~69.3 tok/s, TTFT ~0.14 s

• Olmo 3 32B Think: ~11.0 tok/s, TTFT ~22.12 s

StevenNunez 1 day ago||
I do! I have an M3 Ultra with 512GB. A couple of opencode sessions running at once work well. Currently running GLM 4.7, but I was on Kimi K2.5 before. Both are great. Excited for more efficiency improvements to make their way to LLMs in general.
circularfoyers 20 hours ago||
The prompt processing times I've heard about have put me off going that high on memory with the M series (hoping that changes with the M5 series, though). What are the average and longest times you've had to wait when using opencode? Have any improvements to MLX helped in that regard?
pcf 11 hours ago|||
Wow, Kimi K2.5 runs on a single M3 Ultra with 512 GB RAM?

Can you share more info about quants or whatever is relevant? That's super interesting, since it's such a capable model.

satvikpendem 1 day ago|||
How's the inference speed? What was the price? I'm guessing you can fit the entire model without quantization?
UmYeahNo 1 day ago||
Excellent. Thanks for the info!
TomMasz 16 hours ago||
I've got an M2 Ultra with 64 GB, and I've been using gpt-oss-20b lately with good results. Performance and RAM usage have been reasonable for what I've been doing. I've been thinking of trying the newer Qwen 3 Coder Next just to see what it's like, though.
hnfong 13 hours ago||
I have a maxed-out M3 Ultra. It runs quantized large open Chinese models pretty well. It's slow-ish, but since I don't use them very frequently, most of the time is waiting for the model to load from disk into RAM.

There are benchmarks on token generation speed out there for some of the large models. You can probably guess the speed for models you're interested in by comparing the sizes (mostly look at the active params).

Currently the main issue for M1-M4 is prompt "preprocessing" speed. In practical terms, if you have a very long prompt, it's going to take a long time to process. IIRC it's due to the lack of efficient matrix multiplication operations in the hardware, which I hear is rectified in the M5 architecture. So if you need to process long prompts, don't count on the Mac Studio, at least not with large models.

So in short, if your prompts are relatively short (eg. a couple thousand tokens at most), you need/want a large model, you don't need too much scale/speed, and you need to run inference locally, then Macs are a reasonable option.

For me personally, I got my M3 Ultra somewhat due to geopolitical issues. I'm barred from accessing some of the SOTA models from the US due to where I live, and sometimes the Chinese models are not conveniently accessible either. With the hardware, they can pry DeepSeek R1, Kimi-K2, etc. from my cold dead hands lol.

satvikpendem 1 day ago||
There are some people on r/LocalLlama using it [0]. The consensus seems to be that while it does have more unified RAM for running models, up to half a terabyte, token generation can be slow enough that it might just be better to get an Nvidia or AMD machine.

[0] https://old.reddit.com/r/LocalLLaMA/search?q=mac+studio&rest...

UmYeahNo 1 day ago|
Thanks for the link. I'll take a look.
runjake 13 hours ago||
For anything other than a toy, I would recommend at least a Max processor and at least 32 GB memory, depending on what you're doing. I do a lot of text, audio, and NLP stuff, so I'm running smaller models and my 36GB is plenty.

Ultra processors are priced high enough that I'd be asking myself whether I'm serious about local LLM work and doing a cost analysis.

caterama 14 hours ago||
M3 Ultra with 256 GB memory, using GPT-OSS 120b in ollama. It's decently fast, but it makes the system somewhat unstable. I have to reboot frequently; otherwise the GPU seems to flake (e.g. visual artifacts / glitches in other programs).
rlupi 19 hours ago|
I have an M3 Ultra with 96 GB; it works reasonably well with something like qwen/qwen3-vl-30b (fast), openai/gpt-oss-120b (slow-ish), or openai/gpt-oss-20b (fast, largest context). I keep the latter loaded and have a cron job that generates a new MOTD for my shell every 15 minutes with information gathered from various sources.
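
The cron job is basically a small script hitting the local OpenAI-compatible endpoint; something along these lines, where the endpoint, model name, and output path are placeholders rather than the exact setup:

    # Run from cron, e.g.:  */15 * * * * /usr/bin/python3 ~/bin/gen_motd.py
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",  # local OpenAI-compatible server (LM Studio's default port)
        data=json.dumps({
            "model": "openai/gpt-oss-20b",
            "messages": [{"role": "user", "content": "Write a one-line message of the day."}],
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        motd = json.load(resp)["choices"][0]["message"]["content"]

    with open("/usr/local/etc/motd", "w") as f:  # placeholder output path
        f.write(motd.strip() + "\n")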