Posted by turtles3 14 hours ago
Commenters here are talking about "modern tools" and complex systems, but I'm thinking of the common, simpler cases where I've seen so many people reach for a relational database out of habit. For non-relational data, I prefer something simpler, depending on the requirements.
For large data sets there are plenty of key/value stores to choose from; for small data (less than a megabyte), a CSV file will often work best. Scanning is quicker than indexing for surprisingly large data sets.
And so much simpler.
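To make the small-data case concrete, here's a minimal sketch of the "just scan the CSV" approach. The data and column names are made up for illustration; the point is that a linear scan over a sub-megabyte file is trivial code and plenty fast:

```python
import csv
import io

# Hypothetical tiny "users" table kept as CSV text (in practice, a file on disk).
CSV_DATA = """id,name,country
1,Ada,UK
2,Grace,US
3,Linus,FI
"""

def find_by_country(csv_text, country):
    """Linear scan: no index, no database -- just read every row and filter."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [row["name"] for row in rows if row["country"] == country]

print(find_by_country(CSV_DATA, "US"))  # ['Grace']
```

At this scale the whole file fits in the OS page cache anyway, so an index buys you nothing and costs you a schema, a migration story, and a dependency.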
Supabase helps when building a webapp. But Postgres is the powerhouse.
Look. In a PostgreSQL extension, I can't:
1. extend the SQL language with ergonomic syntax for my use-case,
2. teach the query planner to understand execution strategies that can't be made to fit PostgreSQL's tuple-and-index execution model, or
3. extend the type system to plumb new kinds of metadata through the whole query and storage system via some extensible IR.
(Plus, embedded PostgreSQL still isn't a first-class thing.)
Until PostgreSQL's extension mechanism is powerful enough for me to literally implement DuckDB as an extension, PostgreSQL is not a panacea. It's a good system, but nowhere near universal.
Now, once I can do DuckDB (including its language extensions) in PostgreSQL, and once I can use the thing as a library, let's talk.
(pg_duckdb doesn't count. It's a switch, not a unified engine.)
Postgres won as the starting point again, thanks to Supabase.
e.g. Python, React... very little OCaml, Haskell, etc.
I'm also curious about benchmark results.