This has been the case for far longer than OpenAI and Anthropic have been around, with services like AWS, Cloudflare, etc.
Great observation! The excitement of novelty often makes us lose sight of the real goal.
Based on what I understand about how the former works, I would assume that the latter has the same properties and failure modes.
* What is the answer to local AI for native apps on Windows?
* What is the answer to local AI for Linux?
This is a big opportunity for Linux, given the high quality of open-weight models. I hope some answer emerges before designs fracture and we get a dozen mutually incompatible answers.
Run an AI API endpoint on a Unix domain socket.
By now it runs on 8 GB of VRAM, so a Legion 5 for about $1,500 could be a good workhorse.
Is there a solution for this? I'm currently just making users download ONNX models if they want a feature, but it's not a smooth UX.
- And what about web / JavaScript / Svelte applications?
- Any suggestions for local OCR on bulk images?
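For bulk OCR, one common pattern is to fan the image files out to a pool of workers that each shell out to the Tesseract CLI. A sketch assuming `tesseract` is installed and on `PATH`; the `ocr` parameter is injectable so the batching logic itself has no hard dependency on it:

```python
# Sketch: parallel batch OCR over a directory of images by invoking the
# tesseract CLI per file. Threads are fine here because the work happens
# in subprocesses, not in Python.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def ocr_one(image_path: str) -> str:
    # `tesseract <image> stdout` prints the recognized text to stdout.
    result = subprocess.run(
        ["tesseract", image_path, "stdout"],
        capture_output=True, text=True, check=True)
    return result.stdout

def ocr_batch(paths, ocr=ocr_one, workers=4):
    # Returns {path: recognized_text}; swap in any `ocr` callable.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(paths, pool.map(ocr, paths)))
```

For example, `ocr_batch(glob.glob("scans/*.png"), workers=8)` would process a folder of scans, with throughput limited mostly by how many tesseract processes the machine can run at once.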