Posted by stevekrouse 4/14/2025

A hackable AI assistant using a single SQLite table and a handful of cron jobs (www.geoffreylitt.com)
800 points | 174 comments
didip 4/14/2025|
So… I have a number of questions:

1. How did he tell Claude to “update” based on the notebook entries?

2. Won’t he eventually run out of context window?

3. Won’t this be expensive when using hosted solutions? For just personal hacking, why not simply use ollama + your favorite model?

4. If one were to build this locally, can Vector DB similarity search or a hybrid combined with fulltext search be used to achieve this?

I can totally imagine using pgai for the notebook logs feature and local ollama + deepseek for the inference.
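
For the inference piece, that could be as small as a single request to a locally running Ollama server. A rough sketch in Python, with "deepseek-r1" as a placeholder model name:

    import requests

    # Minimal sketch: one chat call against a local Ollama server on its default port.
    # The model name is just a placeholder; swap in whatever model you have pulled.
    def ask_local_model(prompt: str, model: str = "deepseek-r1") -> str:
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
                "stream": False,  # return a single JSON object instead of a stream
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["message"]["content"]

    print(ask_local_model("Summarize today's notebook entries: ..."))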

The email idea mentioned by other commenters is brilliant. But I don’t think you need a new mailbox; just pull from Gmail and grep for messages where the sender and receiver are both yourself (aka the self tag).

Thank you for sharing; OP’s project is something I have been thinking about for a few months now.

simonw 4/14/2025|
> Won’t he eventually run out of context window?

The "memories" table has a date column which is used to record the data when the information is relevant. The prompt can then be fed just information for today and the next few days - which will always be tiny.

It's possible to save "memories" that are always included in the prompt, but even those will add up to not a lot of tokens over time.
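
As a rough illustration (hypothetical schema; the actual table in the post may differ), the prompt-building step only ever needs a query like this:

    import sqlite3
    from datetime import date, timedelta

    # Hypothetical "memories" table: a NULL date means "always include".
    db = sqlite3.connect("memories.db")
    db.execute(
        "CREATE TABLE IF NOT EXISTS memories (id INTEGER PRIMARY KEY, date TEXT, content TEXT)"
    )

    today = date.today()
    horizon = today + timedelta(days=7)

    rows = db.execute(
        """
        SELECT date, content FROM memories
        WHERE date IS NULL                      -- always-included memories
           OR date BETWEEN ? AND ?              -- just the upcoming window
        ORDER BY date
        """,
        (today.isoformat(), horizon.isoformat()),
    ).fetchall()

    # Only a handful of rows ever match, so the prompt stays tiny.
    prompt_context = "\n".join(f"{d or 'always'}: {c}" for d, c in rows)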

> Won’t this be expensive when using hosted solutions?

You may be under-estimating how absurdly cheap hosted LLMs are these days. Most prompts against most models cost a fraction of a single cent, even for tens of thousands of tokens. Play around with my LLM pricing calculator for an illustration of that: https://tools.simonwillison.net/llm-prices

> If one were to build this locally, can Vector DB similarity search or a hybrid combined with fulltext search be used to achieve this?

Geoffrey's design is so simple it doesn't even need search - all it does is dump in context that's been stamped with a date, and there are so few tokens there's no need for FTS or vector search. If you wanted to build something more sophisticated you could absolutely use those. SQLite has surprisingly capable FTS built in and there are extensions like https://github.com/asg017/sqlite-vec for doing things with vectors.
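
If you did want search later, the built-in FTS works in a few lines (this assumes an SQLite build with FTS5 enabled, which most are):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE VIRTUAL TABLE notes USING fts5(content)")
    db.executemany(
        "INSERT INTO notes (content) VALUES (?)",
        [("Dentist appointment on Friday",), ("Package arriving Tuesday",)],
    )

    # MATCH runs the full-text query; bm25() ranks the results.
    for (content,) in db.execute(
        "SELECT content FROM notes WHERE notes MATCH ? ORDER BY bm25(notes)",
        ("package",),
    ):
        print(content)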

datadrivenangel 4/14/2025|||
SQLite + sqlite-vec/DuckDB for small agents is going to be a very powerful combination.

Do we even need to think of these as agents, or will the agentic frameworks move towards being a call_llm() SQL function?
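
You can already fake that today with SQLite's user-defined functions. A toy sketch, where llm_complete() is a stand-in for a real model call rather than any actual library:

    import sqlite3

    def llm_complete(prompt: str) -> str:
        # Placeholder: call your model of choice here.
        return f"(model answer to: {prompt!r})"

    db = sqlite3.connect(":memory:")
    db.create_function("call_llm", 1, llm_complete)

    db.execute("CREATE TABLE notes (content TEXT)")
    db.execute("INSERT INTO notes VALUES ('Buy milk, call dentist, book flights')")

    # Now the LLM is just another scalar function in SQL.
    for (summary,) in db.execute("SELECT call_llm('Summarize: ' || content) FROM notes"):
        print(summary)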

jonahx 4/14/2025|||
Just want to say I appreciate your posts here on HN and on your blog about AI/LLMs.
theptip 4/14/2025||
This is fun! I think this sort of tooling is going to be very fertile ground for hackers over the next few years.

Large swathes of the stack are commoditized OSS plumbing, and hosted inference is already cheap and easy.

There are obvious security issues with plugging an agent into your email and calendar, but I think many will find it preferable to control the whole stack rather than ceding control to Apple or Google.

ForOldHack 4/14/2025|
So we can just send him self-deleting emails to mine crypto for us? How convenient.

"There are obivious security issues with plugging and agent into your email..." Isn't this how North Korea makes all their crypto happen?

eitland 4/14/2025||
> It’s rudimentary, but already more useful to me than Siri!

For me, that is an extremely low barrier to cross.

I find Siri useful for exactly two things at the moment: setting timers and calling people while I am driving.

For these two things it is really useful, but even in these niches it disappoints: when it comes to calling people, despite having been around me for years now, it insists on stupid things like telling me there is no Theresa in my contacts when I ask it to call Therese.

That said, what I really want is a reliable system I can trust with calendar access and that I can have a discussion with, ideally voice based.

protocolture 4/15/2025||
I went through this weird experience with Cortana on WP7, where I found it incredibly useful to begin with, and then over time it got worse. It seemed like it was created by some incredibly talented engineers. I used it to make calls while driving, set the GPS and search for information as I drove. But over time it seemed to change behaviour and started ignoring my commands, and when it did accept them, it seemed to refer me to paid advertisers. And considering Bing wasn't even as popular ten years ago as it is now, a paid advertiser could be 100km away.

Which I think is a path that people haven't considered with LLMs. We are expecting them to get better forever, but once we start using them, their legs will be cut out from under them to force them to feed us advertising.

jkestner 4/14/2025|||
I've had the same issues of decay. I used to be able to say "call Mom", but now it will call some kid's mom whom I have in Contacts as "[some kid's] mom". What is the underlying architecture such that simple heuristic things like this can get worse? Are they gradually slipping in AI?
theshrike79 4/16/2025||
I always say "call my mom" and it'll pick it up from contacts -> relationships correctly
actionfromafar 4/14/2025||
Clearly you need to make some slight spelling changes to your contacts... ;)
collingreen 4/17/2025||
I feel like the standard Apple response is "if it isn't working correctly you just aren't using it right"

I still regularly experience a bug where my Mac sends sound to the speakers instead of the plugged-in headphone jack after waking from sleep. Ten years ago, when I first looked into it, the official Apple response was "that's not possible with the hardware", and we haven't made any progress since. Gaslighting as a service, I guess.

Luckily I can just unplug and plug back in. Maybe they can bring the great Apple minds together to make my iPhone stop blasting an alarm in my ear at regular volume if I happen to be talking on the phone when it goes off (issue since my very first iPhone 3).

echion 4/19/2025||
> iPhone [...] blasting an alarm in my ear at regular volume if I happen to be talking on the phone when [the alarm] goes off

This always gets me...is there not a public bug report for this one?

0xbadcafebee 4/14/2025||
Hmm, there's supposed to be a Tasks [reminders] feature in ChatGPT, but it's in beta (I don't have access to it). Whenever it gets released, you could make some kind of "router" that connects to different communication methods and connect that up to ChatGPT statefully, and you could just "speak"/type to ChatGPT from anywhere, and it would send you reminders. No need for all the extra logic, cron jobs, or SQLite table (ChatGPT has memory across chats).
hwpythonner 4/14/2025||
Very cool. I’m wondering if you’ve thought about memory pruning or summarization as usage grows?

What do you think of this: instead of just deleting old entries, you could either do LRU (I guess Claude can help with it), or you could summarize the responses and store the summary back into the same table — kind of like memory consolidation. That way raw data fades, but a compressed version sticks around. Might be a nice way to keep memory lightweight while preserving context.
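
A sketch of that consolidation idea against a hypothetical memories(date, content) table, with summarize() standing in for an LLM call:

    import sqlite3
    from datetime import date, timedelta

    def summarize(texts: list[str]) -> str:
        # The LLM call would go here; this is just a stand-in.
        return f"Summary of {len(texts)} old entries: ..."

    db = sqlite3.connect("memories.db")
    cutoff = (date.today() - timedelta(days=30)).isoformat()

    old = [row[0] for row in db.execute(
        "SELECT content FROM memories WHERE date < ?", (cutoff,)
    )]

    if old:
        # Keep a compressed version around (NULL date = always included),
        # then drop the raw entries it replaces.
        db.execute("INSERT INTO memories (date, content) VALUES (NULL, ?)", (summarize(old),))
        db.execute("DELETE FROM memories WHERE date < ?", (cutoff,))
        db.commit()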

simianwords 4/14/2025||
I have built something similar that runs without a server. It required just a few lines in Apple shortcuts.

TL;DR I made shortcuts that work on my Apple watch directly to record my voice, transcribe it and store my daily logs on a Notion DB.

All you need are 1) a chatgpt API key and 2) a Notion account (free).

- I made one shortcut on my iPhone to record my voice, use the Whisper model to transcribe it (done directly via a POST request) and send this transcription to my Notion database (again a POST request from Shortcuts)

- I made another shortcut that records my voice, transcribes it and reads data from my Notion database to answer questions based on what exists in it. It puts all the data from the DB into the context to answer -- costs a lot but it's simple and works well.

The best part is -- this workflow works without my iPhone, directly on my Apple Watch. It uses POST requests internally so there's no need to host a server. And the Notion API happens to be free for this kind of use case.

I like logging my day-to-day activities by just using Siri on my watch and possibly getting insights based on them. Honestly, the Whisper model is what makes it work because the accuracy is miles ahead of the local transcription model.
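
Stripped of the Shortcuts plumbing, the whole thing boils down to two POST requests, roughly like this (shown in Python for clarity; the database ID and the "Name" property are placeholders for whatever your Notion DB uses):

    import requests

    OPENAI_KEY = "sk-..."        # placeholder
    NOTION_KEY = "secret_..."    # placeholder
    NOTION_DB_ID = "..."         # placeholder

    def transcribe(audio_path: str) -> str:
        # POST the recording to the hosted Whisper endpoint.
        with open(audio_path, "rb") as f:
            resp = requests.post(
                "https://api.openai.com/v1/audio/transcriptions",
                headers={"Authorization": f"Bearer {OPENAI_KEY}"},
                files={"file": f},
                data={"model": "whisper-1"},
            )
        resp.raise_for_status()
        return resp.json()["text"]

    def log_to_notion(text: str) -> None:
        # Create a new page (row) in the Notion database.
        resp = requests.post(
            "https://api.notion.com/v1/pages",
            headers={
                "Authorization": f"Bearer {NOTION_KEY}",
                "Notion-Version": "2022-06-28",
                "Content-Type": "application/json",
            },
            json={
                "parent": {"database_id": NOTION_DB_ID},
                "properties": {"Name": {"title": [{"text": {"content": text}}]}},
            },
        )
        resp.raise_for_status()

    log_to_notion(transcribe("memo.m4a"))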

kaonwarb 4/14/2025|
Nice. Can you share?
simianwords 4/14/2025||
I'll plan to do it at some point -- at this moment I have hardcoded my credentials into the shortcut so it's a bit hard to share without tweaking. I didn't bother detailing it because it's sort of simple. I think the idea is key here and anyone with a few hours to kill can get something working.

On second thought -- Apple Shortcuts is really brittle. It breaks in non-obvious ways and a lot can only be learned by trial and error lol

Edit: I just wrote up something quick https://simianwords.bearblog.dev/how-i-use-my-apple-watch-to...

fedeb95 4/15/2025||
The title is a bit misleading since it relies on the Claude API to function.
paulnovacovici 4/14/2025||
Curious, how come you decided to use a cloud solution instead of hosting this on a home server? I’ve recently bought a mini PC for small projects like this and have been loving being able to host with no cost associated to it. Albeit it’s probably still incredibly cheap to use an IaaS or PaaS, it's still a barrier to entry for random projects I want to work on over a weekend.
simonw 4/14/2025||
Val Town has a free tier that's easily enough to run this project: https://www.val.town/pricing

I'd use a hosted platform for this kind of thing myself, because then there's less for me to have to worry about. I have dozens of little systems running in GitHub Actions right now just to save me from having to maintain a machine with a crontab.

bobnamob 4/15/2025|||
A single Cloudflare Durable Object (SQLite DB + serverless compute + cron triggers) would be enough to run this project. DOs have been added to CF's free tier recently - you could probably run a couple hundred (maybe a few thousand) instances of Stevens without paying a cent, aside from Claude costs ofc
lnenad 4/14/2025||
> host with no cost associated to it

Home server AI is orders of magnitude more costly than heavily subsidized cloud based ones for this use case unless you run toy models that might hallucinate meetings.

edit: I now realize you're talking about the non-ai related functionality.

drog 4/14/2025||
I've been using my own Telegram -> AI bot and it's very interesting to see what others do with a similar interface.

I had not thought about adding a memory log of all current things and feeding it into the context; I'll try it out.

Mine is a simple stateless thing that captures messages and voice memos and creates task entries with actionable items in my org-mode file. I only feed the current date into the context.

It's pretty amusing to see how it sometimes adds a little bit of its own personality to simple tasks; for example, if one of my tasks is phrased as a question, it will often try to answer the question in the task description.
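
For anyone curious, the capture loop is tiny. A stripped-down sketch using long polling, where extract_task() stands in for the LLM call and the token and file path are placeholders:

    import requests

    TOKEN = "123456:ABC..."   # placeholder bot token
    API = f"https://api.telegram.org/bot{TOKEN}"
    ORG_FILE = "inbox.org"

    def extract_task(text: str) -> str:
        # The LLM call goes here, fed only the current date as context.
        return text.strip()

    offset = 0
    while True:
        # Long-poll Telegram for new messages.
        updates = requests.get(
            f"{API}/getUpdates", params={"offset": offset, "timeout": 30}
        ).json()
        for update in updates.get("result", []):
            offset = update["update_id"] + 1
            text = update.get("message", {}).get("text")
            if not text:
                continue
            # Append each actionable item as an org-mode TODO heading.
            with open(ORG_FILE, "a") as f:
                f.write(f"* TODO {extract_task(text)}\n")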

kylecazar 4/14/2025|
I like the idea of parsing USPS Informed Delivery emails (a lot of people I encounter still don't know that this service exists). Maybe I'll make something to alert me when my checks are finally arriving!
philsnow 4/15/2025|
This part was galling to me; somewhere in the USPS, the data about which mailpieces/packages are arriving soon exists in a very concise form, and they templatize an email and send it to me, after which I can parse the email with simple but brittle regexes or forward it to a relatively (environmentally) expensive LLM... but if they'd made the information available via an API or RSS feed, or attached the JSON payload to the email in the first place, I could get away without parsing.
kylecazar 4/15/2025|||
It would indeed be nice to have a recipient/consumer-side API!

I don't think it'll ever happen. Really the only valid use case would be for people to hack together something for themselves (like we are discussing)... They don't want to allow third-party developers to create applications on top of this, as Informed Delivery itself has to carefully navigate privacy laws and it could be disastrous.

theshrike79 4/16/2025|||
In Finland our USPS equivalent (Posti) has its own mobile application that does delivery notifications. They've been directing users toward the app pretty heavily and deprecating other interfaces.

You can still get the parcel ID and use a public-ish web API to get tracking information on a rough level ("in transit", "being delivered") without exact address information.
