Posted by ricardbejarano 18 hours ago

Can I run AI locally? (www.canirun.ai)
1069 points | 274 comments
golem14 12 hours ago|
Has anyone actually built anything with this tool?

The website says that code export is not working yet.

That’s a very strange way to advertise yourself.

vednig 10 hours ago||
Our work at DoShare covers a lot of this stuff; we've been at it for two years.
bearjaws 11 hours ago||
So many people have vibe coded these websites, they are posted to Reddit near daily.
arjie 14 hours ago||
Cool website. The one that I'd really like to see there is the RTX 6000 Pro Blackwell 96 GB, though.
amelius 13 hours ago||
What is this S/A/B/C/etc. ranking? Is anyone else using it?
bitexploder 5 hours ago||
Common in gaming culture. Kind of a meme template. S tier is the best tier of something. People make tier lists of all sorts of things with that grading.
relaxing 13 hours ago|||
Apparently S being a level above A comes from Japanese grading. I’ve been confused by that, too.
swiftcoder 13 hours ago||
It's very common in Japanese-developed video games as well
vikramkr 13 hours ago||
Just a tier list I think
sand500 8 hours ago||
How does it have details for M4 ultra?
jrmg 13 hours ago||
Is there a reliable guide somewhere to setting up local AI for coding (please don’t say ‘just Google it’ - that just results in a morass of AI slop/SEO pages with out of date, non-self-consistent, incorrect or impossible instructions).

I’d like to be able to use a local model (which one?) to power Copilot in vscode, and run coding agent(s) (not general purpose OpenClaw-like agents) on my M2 MacBook. I know it’ll be slow.

I suspect this is actually fairly easy to set up - if you know how.

kristianp 3 hours ago||
https://github.com/ggml-org/llama.cpp/releases - has mac binaries

https://unsloth.ai/docs/models/qwen3.5 - running locally guide for the Qwen 3.5 family of models, which have a range of different sizes.
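A rough sketch of the steps with those binaries (model file name is a placeholder; llama-server defaults to port 8080 and exposes an OpenAI-compatible API):

```shell
# serve a GGUF model with llama-server from a llama.cpp release
# (the .gguf file name below is a placeholder for whatever you downloaded)
./llama-server -m qwen3.5-9b-q4_k_m.gguf --port 8080

# in another terminal, hit the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"hello"}]}'
```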

thexa4 7 hours ago|||
I've created a llama.cpp integration with Copilot in vscode. The extension readme contains setup instructions: https://marketplace.visualstudio.com/items?itemName=delft-so...
AstroBen 13 hours ago|||
Ollama or LM Studio are very simple to setup.

You're probably not going to get anything working well as an agent on an M2 MacBook, but smaller models do surprisingly well for focused autocomplete. Maybe the Qwen3.5 9B model would run decently on your system?

jrmg 12 hours ago||
Right - setting up LM Studio is not hard. But how do I connect LM Studio to Copilot, or set up an agent?
NortySpock 12 hours ago|||
I tried the Zed editor and it picked up Ollama with almost no fiddling, so that has allowed me to run Qwen3.5:9B just by tweaking the ollama settings (which had a few dumb defaults, I thought, like assuming I wanted to run 3 LLMs in parallel, initially disabling Flash Attention, and having a very short context window...).

Having a second pair of "eyes" to read a log error and dig into relevant code is super handy for getting ideas flowing.

AstroBen 12 hours ago||||
It looks like Copilot has direct support for Ollama if you're willing to set that up: https://docs.ollama.com/integrations/vscode

For LM Studio under server settings you can start a local server that has an OpenAI-compatible API. You'd need to point Copilot to that. I don't use Copilot so not sure of the exact steps there
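For anyone who wants to sanity-check the server before wiring up an editor, here's a minimal stdlib-only Python sketch, assuming LM Studio's default base URL of http://localhost:1234/v1 (the model name is a placeholder; LM Studio serves whichever model is loaded):

```python
import json
import urllib.request

def build_chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat-completion payload for a local server."""
    return {
        "model": model,  # placeholder; the server uses the loaded model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local(prompt, base_url="http://localhost:1234/v1"):
    """POST the payload to the OpenAI-compatible /chat/completions route."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# ask_local("Write a haiku about local LLMs")  # needs the server running
```

Anything that speaks the OpenAI API (Copilot's custom-endpoint setting, OpenCode, etc.) is doing essentially this under the hood, so if the curl/Python round trip works, the editor integration is just a base-URL setting.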

brcmthrowaway 12 hours ago|||
Basically LM Studio has a server that serves models over HTTP (localhost). Configure/enable the server and connect OpenCode to it.

Try this article https://advanced-stack.com/fields-notes/qwen35-opencode-lm-s...

I'm looking for an alternative to OpenCode though, I can barely see the UI.

AstroBen 12 hours ago||
Codex also supports configuring an alternative API for the model, you could try that: https://unsloth.ai/docs/basics/codex#openai-codex-cli-tutori...
randusername 9 hours ago|||
Personally I'd start with llamafile [0] then move to compiling your own llama.cpp.

It's not as bad as you might think to compile llama.cpp for your target architecture and spin up an OpenAI compatible API endpoint. It even downloads the models for you.

[0]: https://github.com/mozilla-ai/llamafile
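The llamafile route is about as short as it gets (file name below is a placeholder; a llamafile is a single self-contained executable that bundles the model weights with a llama.cpp server):

```shell
# download a .llamafile (placeholder name), make it executable, and run it;
# by default it starts a local web UI and API server
chmod +x qwen3.5-9b.llamafile
./qwen3.5-9b.llamafile
```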

chatmasta 12 hours ago||
Any time I google something on this topic, the results are useful but also out of date, because this space is moving so absurdly fast.
lagrange77 11 hours ago||
Finally! I've been waiting for something like this.
amelius 13 hours ago||
Why isn't there some kind of benchmark score in the list?
nicklo 9 hours ago|
the animation of the model name text when opening the detail view is so smooth and delightful