Posted by lairv 10 hours ago
It seems to me there is no chance local ML will get beyond toy status compared to the closed-source offerings in the short term
a) to have an idea how many tokens I use and
b) to be independent of VC-financed token machines and
c) to be able to use it on a plane/train
Also, I never have to wait in a queue, nor am I told to come back in a few hours. And I get many answers in a second. I don't do full vibe coding with a dozen agents, though; I read all the code it produces and guide it where necessary.
Last but not least, at some point the VC-funded party will be over, and when that happens one had better know how to be highly efficient with AI token use.
I did use candle for wasm-based inference for teaching purposes - that was reasonably painless and pretty nice.
How can I realistically get involved in the AI development space? I feel left out with what's going on, living in a bubble where AI (GitHub Copilot) is forced on me by my employer. What is a realistic roadmap to slowly get into AI development, whatever that means?
My background is full-stack development in Java and React, though development there is slow.
I've only messed with AI on the application side: I created a local chatbot for demo purposes to understand what RAG is about, and I've run models locally. But all of this is very superficial and I feel I'm not really in deep with what AI is about. I get that I'm too 'late' to be on the side of building the next frontier model, and that that makes no sense for me - what else can I do?
I know Python; the next step is maybe to do 'LLM from Scratch'? Or pick up the Google Machine Learning Crash Course certificate? Or do the recently released Nvidia certification?
I'm open to suggestions.
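One cheap way to get past the superficial stage is to rebuild the RAG demo with no framework at all, so you see the moving parts. Here's a toy sketch of the retrieval step using bag-of-words cosine similarity in place of a real embedding model; the documents and query are made-up examples, and a real system would use embeddings plus a vector store:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Dot product over the shared words, normalized by vector lengths.
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query, return the top k.
    qv = Counter(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "Ollama runs large language models locally on your machine.",
    "React is a JavaScript library for building user interfaces.",
    "RAG retrieves relevant documents and adds them to the prompt.",
]
query = "how does RAG use retrieved documents in the prompt"

# The "augmented generation" half is just prompt assembly:
context = "\n".join(retrieve(query, docs, k=1))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

Swap `retrieve` for an embedding model and hand `prompt` to a local model, and you've rebuilt the demo end to end - which teaches more than any framework wrapper.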
Hopefully this does not mean consolidation because the resources dried up, but a true fusion of the best of both.
Ollama and webui seem to be rapidly losing their charm. Ollama now includes cloud APIs, which makes no sense for a local tool.