Posted by cylo 22 hours ago

Local AI needs to be the norm (unix.foo)
1486 points | 579 comments
khoury 6 hours ago|
Agree with the sentiment, but: "We are building applications that stop working the moment the server crashes or a credit card expires."

This has been the case for way longer than OpenAI and Anthropic have been around, with services like AWS, Cloudflare, etc.

z3t4 8 hours ago||
We are experimenting with local LLMs and opencode at work. The quality is not as good as Claude Code et al., but it's not far off, and local speed is actually faster. We got 3 of Nvidia's latest AI GPUs, which was not cheap. It's not enough to train our own models, but we can run the biggest open models with some tweaking.
imnes 12 hours ago||
I'm going through a similar exercise right now in an app I'm building: no server dependencies, moving features that have traditionally used server-side APIs onto the device, and also using the on-board AI features provided by Android and iOS. So far it's been a very positive experience, and these devices have been more than capable for my needs. I'm working on apps that don't have the ongoing cost of running server-side infrastructure, so I can offer them as "pay once, run it forever" instead of an ongoing subscription for the user.
andychiare 5 hours ago||
> “AI everywhere” is not the goal. Useful software is the goal.

Great observation! Often the excitement of novelty makes us lose sight of the real goal.

reshef316 7 hours ago||
not saying i disagree with the general statement, but there need to be options. not everyone has a machine capable of the kind of lifting required to properly run a local version. so what happens if my machine is older? i'll be locked out? restricted? forced to pay?
dgb23 7 hours ago||
I'm surprised at the presented dichotomy between JSON formatting and what the Apple SDK provides to parse output into structs.

Based on what I understand about how the former works, I would assume that the latter has the same properties and failure modes.

hackyhacky 18 hours ago||
I would like a standardized API for local AI to exist outside of the Apple ecosystem. The Prompt API in Chrome is halfway there.

* What is the answer to local AI for native apps on Windows?

* What is the answer to local AI for Linux?

This is a big opportunity for Linux, given the high quality of open-weight models. I hope some answer emerges before designs fracture and we get a dozen mutually incompatible answers.

franze 18 hours ago||
i researched that question for apfel https://github.com/Arthur-Ficial/apfel and the closest thing to a standardized API is the OpenAI API, so that's what i went with
hackyhacky 17 hours ago||
OpenAI's API is not local AI.
zozbot234 17 hours ago||
Most local AI servers expose that API.
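A minimal sketch of what that looks like in practice, using only the standard library: the client builds a plain OpenAI-style chat completion request and points it at a local base URL. The host, port, and model name below are placeholders for whatever server you actually run locally.

```python
import json
import urllib.request

# Placeholder: substitute whatever your local server listens on.
BASE_URL = "http://localhost:8080/v1"

def chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat completion request for a local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = chat_request("hello")
# urllib.request.urlopen(req) would send it to the local server.
```

The point is that the wire format is identical whether the URL is api.openai.com or localhost, which is why the OpenAI API works as a de facto standard for local servers.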
teravor 17 hours ago||
> What is the answer to local AI for Linux?

run an AI API endpoint on a unix domain socket
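A self-contained sketch of that idea: the only change from a TCP server is binding to a filesystem path instead of a host:port. The socket path is a placeholder, and the server here returns a canned reply where a real local AI endpoint would run inference.

```python
import os
import socket
import threading

SOCK_PATH = "/tmp/local-ai.sock"  # placeholder path

# Server side: bind an AF_UNIX stream socket instead of a TCP port.
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
srv.listen(1)

def serve_once():
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and, in this sketch, ignore) the request
    body = b'{"reply": "ok"}'
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                 % (len(body), body))
    conn.close()
    srv.close()

t = threading.Thread(target=serve_once)
t.start()

# Client side: connect to the socket path instead of a host:port.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK_PATH)
cli.sendall(b"POST /v1/chat/completions HTTP/1.1\r\nHost: local\r\n"
            b"Content-Length: 0\r\n\r\n")
response = cli.recv(4096)
cli.close()
t.join()
```

A nice property of this setup is that access control falls out of ordinary file permissions on the socket path, with no open network port at all.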

AuditMind 3 hours ago||
It's almost here. Look at the new Qwen 3.6 models. Solid stuff there.

They now run on 8 GB of VRAM, so a Legion 5 at about $1500 could be a good workhorse.

deivid 16 hours ago||
Sounds great, but if you didn't cave to apple/google (eg: graphene, lineage), the models are not built in. Every app needs to ship its own models, and they are not tiny.

Is there a solution for this? I'm currently just making users download ONNX models if they want a feature, but it's not a smooth UX
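One common mitigation is to download the model once and cache it, so the cost is paid on first use rather than shipped in the app bundle. A sketch with the standard library; the cache directory, file name, and URL are all hypothetical.

```python
import urllib.request
from pathlib import Path

def ensure_model(name: str, url: str,
                 cache_dir: str = "~/.cache/myapp/models") -> Path:
    """Return the local path to a model, downloading it only on first use.

    The cache directory and URL are placeholders; the point is that the
    (large) model file is fetched once and then reused across runs.
    """
    cache = Path(cache_dir).expanduser()
    cache.mkdir(parents=True, exist_ok=True)
    dest = cache / name
    if not dest.exists():
        tmp = dest.with_suffix(dest.suffix + ".part")
        urllib.request.urlretrieve(url, tmp)  # download to a temp file first
        tmp.replace(dest)  # atomic rename: a crash never leaves a half model
    return dest
```

It doesn't solve the underlying problem (every app still carries its own copy), but the temp-file-then-rename pattern at least makes an interrupted download recoverable.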

vivzkestrel 10 hours ago|
- can we get suggestions from people on what the equivalent would be for Android

- and for the web / javascript / svelte applications?

- suggestions for local OCR for bulk images?

kajman 8 hours ago|
I hope there's no web equivalent for a while. I usually hate app lock-in, but any hasty API for this is going to be a DoS or fingerprinting nightmare.