
Posted by franze 2 days ago

Show HN: Apfel – The free AI already on your Mac (apfel.franzai.com)
Github: https://github.com/Arthur-Ficial/apfel
623 points | 136 comments
EddieLomax 1 day ago|
This is similar to something I was playing around with last month: basically just a CLI for accessing the foundation models.

https://github.com/ehamiter/afm

It's really handy for quick things like "what's the capital of country x" but for coding, I feel that it is severely limited. With such a small context it's (currently) not great for complicated things.

VanTodi 1 day ago||
Just a small thing about the website: the rotating examples shift all the elements below them on mobile, making the page jump around while you're trying to read.
arendtio 2 days ago||
For those who don't know, 'Apfel' is the German word for Apple.
gherkinnn 1 day ago|
And for those who did know that and want to know more: the shift from apple -> Apfel and water -> Wasser happened during the High German consonant shift.

https://en.wikipedia.org/wiki/High_German_consonant_shift

millionclicks 1 day ago||
Awesome idea. You should launch this on Buildfeed.co.
swiftcoder 2 days ago||
Anyone tried using this as a sub-agent for a more capable model like Claude/Codex?
khalic 2 days ago||
If you’re looking into small models for tiny local tasks, you should try Qwen coder 0.5B. It’s more of an experiment, but it can output decent functions given the right context instructions.
xenophonf 1 day ago||
> [Qwen coder 0.5B] can output decent functions given the right context instructions

Can you share a working example?

khalic 1 day ago||
So… a prompt? I’m not on my laptop, but I hooked it up to cmp.nvim, gave it a short situational prompt of roughly 10 lines, and started typing. Nowhere near usable as-is, but with a little effort you can get something OK for repetitive tasks — maybe spotting one specific code-smell pattern. The advantage is the ridiculous tokens/s you get.
LatencyKills 2 days ago|||
The combined (input/output) context window length is 4K. Claude would blow through that even when trying to read and summarize a small file.
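To get a feel for how quickly a 4K combined window fills up, a common rule of thumb (an approximation, not the model's actual tokenizer) is about four characters per English token:

```python
# Rough feel for a 4K combined (input + output) context window.
# Rule of thumb: ~4 characters per token for English text; the real
# tokenizer will differ, so treat these numbers as ballpark only.

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

small_file = "x = 1\n" * 400          # ~2400 characters of "code"
prompt = "Summarize this file:\n" + small_file

used = approx_tokens(prompt)
budget = 4096
print(f"~{used} tokens used, ~{budget - used} left for the reply")
```

Even this toy 400-line file eats a sizeable chunk of the budget before the model has produced a single output token, which is why "read and summarize a small file" blows through 4K so easily.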
knocte 1 day ago||
With a small/minimalistic harness like Pi maybe it works well?
coredog64 1 day ago|||
I was thinking about the other way: Could you use this in front of Claude to summarize inputs and so reduce your token counts?
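One way to sketch that pipeline: run the cheap local model first to compress the input, then send only the summary to the expensive model. Both functions below are hypothetical stand-ins (the summarizer is a trivial extractive stub where the on-device model would go; `send_to_claude` is not a real API):

```python
# Sketch of a "local model in front of Claude" pipeline.
# local_summarize is a stub standing in for the on-device model;
# send_to_claude is hypothetical -- neither is a real API here.

def local_summarize(text: str, max_chars: int = 200) -> str:
    """Stand-in: keep only the first sentence of each paragraph."""
    firsts = [p.split(". ")[0] for p in text.split("\n\n") if p.strip()]
    return ". ".join(firsts)[:max_chars]

def send_to_claude(prompt: str) -> str:
    return f"[would send {len(prompt)} chars to the big model]"

doc = "First point. Lots of detail.\n\nSecond point. More detail."
compressed = local_summarize(doc)
print(send_to_claude(compressed))  # far fewer chars than the raw doc
```

The trade-off is that the summarizer's 4K window caps how much input it can compress per call, so large files would still need chunking.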
franze 2 days ago||
project started with trying to run openclaw with it in ultra token-saving mode; that did not work at all.

great for shell scripts though (my major use case now)

nose-wuzzy-pad 1 day ago||
Does the local LLM have access to personal information from the Apple account associated with the logged-in user? Maybe through a RAG pipeline or similar? Just curious if there are any risks associated with exposing this in a way that could be exploited via CORS or through another rogue app querying it locally.
franze 1 day ago|
no. the on-device FoundationModels framework that apfel uses does not have access to personal information from the apple account. the model is a bare language model with no built-in personal data access.

apple does have an on-device rag pipeline, called the semantic index, that feeds personal data like contacts, emails, calendar and photos into the model context, but that is only available to apple's own first-party features like siri and system summaries.

it is not exposed through the FoundationModels api.

divan 1 day ago||
What's the easiest way to use it with on-device voice model for voice chat?
windsurfer 1 day ago||
https://github.com/Arthur-Ficial/apfel-gui uses on-device speech-to-text and text-to-speech
divan 1 day ago||
Thanks, tried it, but it crashes on clicking the microphone icon. The default `make install` for some reason tries to install to /usr; I changed that, and after torturing more mature coding LLMs for 20 minutes, got it running with mic/sound.

The mic button requires clicking again after each utterance to transcribe and resume listening, and the default voice is low-quality (I assume it can be configured).

In general I'm looking for a way to try the on-device hands-free voice mode.

contingencies 1 day ago||
https://handy.computer
Oras 2 days ago||
I like the idea and the clarity of the usage explanation. My question would be: what kind of tasks would it be useful for?
khalic 2 days ago|
Making a sentence out of a JSON
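That kind of job mostly comes down to prompt construction. A minimal sketch of the prompt you might feed a tiny local model for it (the wording is illustrative, not anything apfel actually ships):

```python
import json

# Illustrative only: build a "turn this JSON into a sentence" prompt
# for a small local model. The prompt wording and sample record are
# assumptions, not part of apfel.
record = {"name": "Ada", "city": "London", "orders": 3}
prompt = (
    "Write one plain-English sentence describing this record:\n"
    + json.dumps(record)
)
print(prompt)
```

Tasks like this suit a 0.5B-class model: all the facts are in the input, so the model only has to rephrase, not recall.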
mark_l_watson 1 day ago|
I have been using Apple’s built-in system LLM for the last 7 or 8 months. I like that, when it needs to, it occasionally falls back to a more powerful, secure private-cloud model. I also wrote my own app to wrap it.