Posted by HunOL 11/1/2025
We're having trouble with memory usage when using SQLite in-memory DBs with "a lot" of inserts and deletes. Like maybe inserting up to 100k rows in 5 minutes, deleting them all after 5 minutes, and doing this for days on end. We see memory usage slowly creeping up over hours/days when doing that.
Any settings that would help with that? It's particularly bad on macOS; we've had instances where we reached 1GB of memory usage according to Activity Monitor after a week or so.
However... what you (and OP) are looking for might be PRAGMA shrink_memory [1].
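For illustration, here's a minimal sketch of calling it from Python's stdlib sqlite3 module after a purge; the connection setup and cadence are my assumptions, not something from this thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# ... heavy insert/delete churn happens here ...

# After a big DELETE, ask this connection to give freed heap memory
# (page cache, etc.) back to the allocator instead of holding onto it.
conn.execute("PRAGMA shrink_memory;")
```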
A million years ago, back when I still used Emby, I was annoyed that I couldn't use it across multiple nodes in Docker Swarm due to SQLite's locking. It really annoyed me, enough that I started (but never completed) a driver to change the DB to Postgres [1]. I ended up moving everything over to a single server, which is mostly fine unless I have multiple people transcoding at the same time.
If this is actually fixed then I might have an excuse to rearchitect my home server setup again.
I am using it to loop through a database of 11,000 words, hit an HTTP API (ChatGPT) for each one, and generate example sentences for the word. I would love to be able to launch these API calls asynchronously and have them come back and update the database row when ready, but I'm not sure how to handle the database getting hit by all these writes from (as I understand it) multiple instances of the same Python program/function.
By default, SQLite will not do what you want out of the box. You have to turn on some feature flags (PRAGMAs) to get it to behave for you. You need WAL mode, etc. Read these (there's a sketch of the setup after the links):
* https://kerkour.com/sqlite-for-servers
* https://zeroclarkthirty.com/2024-10-19-sqlite-database-is-lo...
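A minimal sketch of that setup in Python's stdlib sqlite3, assuming a file-backed database (the path is a placeholder):

```python
import sqlite3

conn = sqlite3.connect("words.db")            # hypothetical path
conn.execute("PRAGMA journal_mode=WAL;")      # readers no longer block the writer
conn.execute("PRAGMA synchronous=NORMAL;")    # the usual pairing with WAL
```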
My larger question is: why multiprocessing? This looks like an IO-heavy workload, not a CPU-bound one, so Python asyncio or Python threads would probably do you better.
Multiprocessing is for when your resource hog is the CPU (roughly one Python process per core), not when you're IO bound.
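For what it's worth, here's a sketch of the threaded version; the table name, columns, file path, and the stand-in for the ChatGPT call are all assumptions for illustration:

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor

DB = "words.db"  # hypothetical path

def fetch_sentence(word):
    # Stand-in for the real ChatGPT HTTP call. Network IO releases the GIL,
    # so plain threads overlap nicely for this kind of workload.
    return f"An example sentence using {word}."

def process(word):
    sentence = fetch_sentence(word)
    conn = sqlite3.connect(DB, timeout=30)  # wait up to 30s for the write lock
    try:
        with conn:  # transaction scope: commits on success, rolls back on error
            conn.execute("UPDATE words SET example = ? WHERE word = ?",
                         (sentence, word))
    finally:
        conn.close()

conn = sqlite3.connect(DB)
conn.execute("PRAGMA journal_mode=WAL;")
words = [row[0] for row in conn.execute("SELECT word FROM words")]
conn.close()

with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(process, words))  # list() surfaces any worker exceptions
```

Each worker opens its own connection; SQLite serializes the writes internally, and with WAL plus a busy timeout the workers just take turns.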
What you're describing sounds like it would work fine to me. The blog post is misleading imho - it implies that SQLite doesn't handle concurrency at all. In reality, you can issue a bunch of writes in parallel and SQLite will run them one after the other internally. This works across applications and processes; you just need to go through SQLite to interact with the database. The blog post is also misleading when it implies that the application has to manage access to the database file in some way.
Yes, it's correct that only one of those writes will execute at a time, but it's not like you have to account for that in your code, especially in a batch-style process like the one you're describing. In your Python code you'll just update a row, and it will look like that happens concurrently with the other updates.
I'll bet that your call to ChatGPT will take far longer than updating the row, even accounting for time when the write is waiting for its turn in SQLite.
Use WAL mode for the best performance (and to reduce SQLITE_BUSY errors).
I will look into WAL mode. I am enjoying using SQLite (and aware that it's not the solution for everything), and have several upcoming tasks where I'm planning to use async stuff - and yes, trying to find the balance in how to handle those async tasks (networky HTTP calls being different from running `ffmpeg` locally).
If one thread is writing and another thread tries to write, the first thread will hold the file write lock, and the second thread will wait to write until that lock is released.
I've written code using the pattern you describe and it's totally fine.
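One caveat worth making explicit: the blocked writer only waits if a busy timeout is in effect; otherwise SQLite fails fast with SQLITE_BUSY ("database is locked"). Python's stdlib sqlite3 defaults to 5 seconds, and you can raise it either way (the path here is a placeholder):

```python
import sqlite3

# Set the timeout on the connection (in seconds)...
conn = sqlite3.connect("app.db", timeout=30)

# ...or via the PRAGMA (in milliseconds). Both make a blocked writer
# wait for the lock instead of immediately raising "database is locked".
conn.execute("PRAGMA busy_timeout = 30000;")
```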
You can't. You have a single writer - it's one of the many reasons sqlite is terrible for serious work.
You'll need a multiprocessing Queue and a writer that picks off sentences one by one and commits them.
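A rough sketch of that pattern, reusing the hypothetical `words` table from upthread; the producers do the API calls in their own processes and only ever touch the queue:

```python
import sqlite3
from multiprocessing import Process, Queue

def writer(queue, db_path):
    # The single dedicated writer: drain the queue and commit each update,
    # so no other process ever needs the write lock.
    conn = sqlite3.connect(db_path)
    while True:
        item = queue.get()
        if item is None:  # sentinel: producers are finished
            break
        word, sentence = item
        with conn:  # commit each update as its own transaction
            conn.execute("UPDATE words SET example = ? WHERE word = ?",
                         (sentence, word))
    conn.close()

if __name__ == "__main__":
    queue = Queue()
    w = Process(target=writer, args=(queue, "words.db"))
    w.start()
    # Producers would call queue.put((word, sentence)) as results arrive.
    queue.put(("serendipity", "Finding that cafe was pure serendipity."))
    queue.put(None)  # tell the writer to stop
    w.join()
```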
What do you consider "serious" work? We've served a SaaS product from SQLite (roughly 300-500 queries per second at peak) for several years without much pain. Plus, it's not like PG and MySQL are pain-free, either - they all have their quirks.
I mean, it's not pain-free if he's got lock contention from SQLITE_BUSY errors, now is it, as he implies. Much of his trouble will stem from transactions blocking each other; maybe they're long-lived, maybe they're not. And those 300-500 queries - are they writes or reads? Because reads are not the problem.