Posted by samuel246 7/1/2025
Which is why solving the "I want my data in fast storage as often as possible" problem may be counter-productive on the whole: you ain't the only client of the system; let it breathe and serve requests from others.
One bit of interesting trivia about Facebook (from memory): if you add up all the RAM spent on redis/memcached caches plus disk and DB caches to make the thing work at scale, then for about 20-30% more memory you could've had the whole dataset in memory 100% of the time.
Obviously things grow - look who I mentioned, after all.
The focus is on how you handle growth.
Oof you're trying so hard you could cut diamond with that line.
Author is talking about the least interesting, and easiest, piece of the overall caching problem.
(Although a materialised view is more like an index than a cache. The view won't expire and force you to rebuild it.)
In RDBMS contexts, an index really is a caching mechanism (a cache) managed by the database system (the query planner needs to decide when it's best to use one index or another).
But as you note yourself, even in these cases where cache management is bundled with the database, having too many indexes can slow down (even deadlock) writes as the database tries to keep these redundant data storage elements consistent.
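The "query planner decides when to use an index" point is easy to see with SQLite's `EXPLAIN QUERY PLAN` (a minimal sketch; the table and index names here are made up for illustration):

```python
import sqlite3

# Hypothetical schema: one indexed column (email), one unindexed (age).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, age INTEGER)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")
conn.executemany(
    "INSERT INTO users (email, age) VALUES (?, ?)",
    [(f"u{i}@example.com", i % 80) for i in range(1000)],
)

# Equality lookup on the indexed column: the planner searches via the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("u42@example.com",),
).fetchall()
print(plan[0][3])  # detail column mentions idx_users_email

# No useful index on age: the planner falls back to a full table scan.
plan2 = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE age = 42"
).fetchall()
print(plan2[0][3])  # detail column reports a SCAN of users
```

Every extra index like `idx_users_email` is one more redundant structure the engine must update on each write, which is exactly where the write-slowdown comes from.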
In some sense, though: if it ain't L1, it's storage :)
Even if "cache" is in the name (e.g. memcached), that's still not a cache in itself - it's a KV store designed for caching.