Posted by cylo 1 day ago

Local AI needs to be the norm(unix.foo)
1636 points | 640 comments
mercurialsolo 18 hours ago|
Not your weights, not your brain. Owning your own action and decision model is super important as these models emulate more of our decisions, thinking, and learning. I built claudectl, a local brain for coding agents: https://github.com/mercurialsolo/claudectl
imrozim 10 hours ago||
I use the Claude API for my startup, and the billing and rate limits hurt. But local models can't do what I need yet. Wish they could.
AuditMind 5 hours ago||
It's almost here. Look at the new Qwen 3.6 models. Solid stuff there.

It now runs on 8 GB of VRAM, so a Legion 5 at about $1500 could be a good workhorse.

everlier 19 hours ago||
There's never been a better time to run LLMs locally. It's just a few commands from zero to a fully working LLM homelab.

```
harbor pull unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_XL

# Open WebUI -> llama.cpp + SearXNG for Web RAG + OpenTerminal as sandbox
harbor up searxng webui llamacpp openterminal
```

That's it, it's already better than Claude's or ChatGPT's app.
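For what it's worth, once a stack like that is up, llama.cpp exposes an OpenAI-compatible HTTP API, so any script can talk to it directly. A minimal sketch using only the standard library; the host and port here are assumptions, so check which port your harbor setup actually maps for the llamacpp service:

```python
import json
from urllib import request

# Assumed endpoint -- harbor may map the llama.cpp server elsewhere.
BASE_URL = "http://localhost:8080"

def build_chat_request(prompt: str) -> request.Request:
    """Build an OpenAI-style chat completion request for a local llama.cpp server."""
    payload = {
        "model": "local",  # llama.cpp serves whichever model it loaded
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# To actually query the local model:
# resp = request.urlopen(build_chat_request("Summarize my notes"))
# print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the API shape matches OpenAI's, existing client code usually only needs the base URL swapped to point at the local box.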

ramon156 11 hours ago||
GLM 5.1 is very impressive. I wouldn't be surprised if we get to a point where it fits in ~48 GB with reliable speed and quality.
maxdo 8 hours ago||
The start of the argument is already broken. OK, slapping in an API is bad, so instead you push an API that mimics your provider, install some Chinese LLM that will never obey any lawsuit in your country, and pull in 500 packages to do it, every one of them a potential security risk. How is that better?

Oh yeah, it feels independent and not lazy, sure.

RyanZhuuuu 11 hours ago||
I’m skeptical that local AI will work well with today’s technology. Running capable models consumes too many resources on end-user devices.
FrasiertheLion 19 hours ago||
Overall I'm bullish on standardized local APIs that ship with the browser or platform. Far more tractable than expecting end users to stand up their own local model instances, though r/LocalLLaMA is a fantastic community to follow if you want to go that route.

A useful framing for “local vs cloud AI” splits tasks along two axes: does the task touch private data, and does it need frontier intelligence? You can use frontier models for developing the software (doesn’t touch data), but open-source models running locally for ops: maintenance, debugging, and monitoring (touches data). If you need to fall back to frontier intelligence for a particularly hard-to-resolve problem, you can still rely on local models to pre-transform and filter input in a way that's privacy-preserving, or that satisfies some other constraint, before it’s sent off to the cloud for processing. OpenAI's privacy filter (https://openai.com/index/introducing-openai-privacy-filter/) is a good example: a model that can run locally to mask PII and secrets before any data is sent externally.
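The pre-transform idea can be illustrated with a toy local filter. This is my own sketch, not OpenAI's privacy filter (which is a model, not a regex list); the patterns and placeholder names are illustrative assumptions:

```python
import re

# Toy local pre-filter: mask obvious PII/secrets before a prompt
# ever leaves the machine. A production privacy filter would use a
# local model; regexes only catch the easy, well-structured cases.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style key shape
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each match with a typed placeholder like <EMAIL>."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

prompt = "User bob@example.com hit a 500 from 10.0.0.7 using sk-" + "a" * 24
print(mask_pii(prompt))
# → User <EMAIL> hit a 500 from <IPV4> using <API_KEY>
```

Only the masked string goes to the cloud model; the mapping from placeholders back to real values stays on the local machine.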

Another framing for local vs closed frontier models, which the article mentions, is whether the task saturates model capability. For tasks like PDF processing, voice, or summarization, adding more intelligence isn't necessarily useful. Arguably we've already reached that point for chat interfaces with frontier open-source models. But for coding and ops through well-structured tool use inside a coding-capable harness, we're still a ways away.

Tangentially, a contrarian take here is that AI can actually enable more privacy-preserving software if you’re so inclined. It lowers the barrier to entry for building personalized software and the effort required to self-host it. SaaS complexity often comes from scaling and supporting features for every type of customer; if you're building software for personal use, you don't need all that additional complexity. Additionally, the foundational and infra software that is harder to vibecode with AI is often already open source.

msteffen 21 hours ago||
> One of the current trends in modern software is for developers to slap an API call to OpenAI or Anthropic for features within their app.

Well, there’s your problem: control needs to go the other way. If you want your app to be AI-enabled, you need to make it easy for AI to control your app. Have you used OpenClaw? It’s awesome!

osjxjsjxjs 3 hours ago|
No AI needs to be the norm. Again.