Finally! I've been waiting for something like this.
ThrowawayTestr 13 hours ago||
For image generation, or even video generation, local models are totally feasible. I can generate a 5-second clip with Wan 2.2 in about 30 minutes on my 3060 12GB. Plus, I have full control over the LoRAs used.
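For a rough sense of what that works out to per frame, here's a back-of-the-envelope sketch. The 16 fps frame rate is my assumption (a common default for Wan-style video models), not something stated in the comment:

```python
# Back-of-the-envelope throughput for the 5-second clip above.
# Assumption: 16 fps output frame rate (not stated in the comment).
CLIP_SECONDS = 5
FPS = 16            # assumed
WALL_MINUTES = 30

frames = CLIP_SECONDS * FPS                       # 80 frames total
seconds_per_frame = WALL_MINUTES * 60 / frames    # wall time spread per frame

print(f"{frames} frames, ~{seconds_per_frame:.1f} s of compute per frame")
# → 80 frames, ~22.5 s of compute per frame
```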
S4phyre 17 hours ago||
Oh how cool. Always wanted to have a tool like this.
ipunchghosts 14 hours ago||
What is S? Also, NVIDIA RTX 4500 Ada is missing.
tristor 15 hours ago||
This does not seem accurate based on my recently received M5 Max 128GB MBP. I think there's some estimation/guesswork involved, and it also discounts that you can move the memory divider on unified-memory devices like Apple Silicon and the AMD AI Max 395+.
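On macOS, the divider mentioned above can be nudged with a sysctl that raises how much unified memory the GPU may wire. A sketch, not a recommendation: the 96 GiB figure is an arbitrary example for a 128 GB machine, and the setting resets on reboot.

```shell
# Raise the GPU "wired" memory limit on Apple Silicon macOS.
# Value is in MiB; 98304 MiB = 96 GiB (example figure for a 128 GB machine).
sudo sysctl iogpu.wired_limit_mb=98304

# Check the current limit
sysctl iogpu.wired_limit_mb
```

Leave enough headroom for the OS and your other processes, or the machine will start swapping.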
brcmthrowaway 16 hours ago||
If anyone hasn't tried Qwen3.5 on Apple Silicon, I highly suggest you do! Claude-level performance on local hardware. If the Qwen team didn't get fired, I would be bullish on local LLMs.
kylehotchkiss 16 hours ago||
My Mac mini rocks qwen2.5 14b at a lightning-fast 11 tokens a second, which is actually good enough for the long-term data processing I make it spend all day doing. It doesn't lock up the machine or prevent it from fulfilling its primary purpose as a webserver.
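At that decode rate, batch-job runtimes are easy to ballpark. A sketch only: the ~500-token average output per item is my assumption, not from the comment.

```python
# Estimate wall-clock time for an all-day batch job at a fixed decode rate.
# Assumptions: 11 tokens/s (from the comment), ~500 output tokens per item.
TOKENS_PER_SECOND = 11
TOKENS_PER_ITEM = 500   # assumed average output length per document

def batch_hours(items: int) -> float:
    """Hours of pure decode time to process `items` documents."""
    return items * TOKENS_PER_ITEM / TOKENS_PER_SECOND / 3600

print(f"{batch_hours(800):.1f} h")
# → 10.1 h — 800 items is roughly a full day at this rate
```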
nilslindemann 16 hours ago||
1. More title attributes please ("S 16 A 7 B 7 C 0 D 4 F 34", huh?)
2. Bump the size of your site by about 150%.
Otherwise, cool site, bookmarked.
Akuehne 12 hours ago||
Can we get some of the ancient Nvidia Teslas added, like the P40?