Posted by tanelpoder 3 hours ago

Keeping a Postgres Queue Healthy (planetscale.com)
43 points | 8 comments
cataflutter 10 minutes ago|
Decent article, but some remarks:

1) It seems these two statements conflict with each other:

> The oldest such transaction sets the cutoff—referred to as the "MVCC horizon." Until that transaction completes, every dead tuple newer than its snapshot is retained.

and

> For example, imagine three analytics queries, each running for 40 seconds, staggered 20 seconds apart. No individual query would trigger a timeout for running too long. But because one is always active, the horizon never advances, and the effect on vacuum is the same as one transaction that never ends.

If the three analytics *transactions* (it's transactions that matter, not queries, though there is some subtlety: a transaction doesn't acquire its snapshot until its first query) are started at different times, they hold staggered snapshots, so as soon as the earliest one completes the horizon should advance and vacuum can make progress.
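
The staggering is easy to observe: each backend's snapshot shows up as `backend_xmin` in `pg_stat_activity`, and the oldest one across sessions is effectively the horizon. A quick sketch:

    -- The oldest backend_xmin across sessions is what holds back vacuum.
    SELECT pid, state, xact_start, age(backend_xmin) AS xmin_age
    FROM pg_stat_activity
    WHERE backend_xmin IS NOT NULL
    ORDER BY age(backend_xmin) DESC;
    -- Run this while the three staggered queries execute: the max age
    -- resets as each one finishes, i.e. the horizon does advance.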

2) Although the problem of this query:

    SELECT * FROM jobs
    WHERE status = 'pending'
    ORDER BY run_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED;
having to consider dead tuples is a genuine concern and performance problem, this can also be mitigated by adding a monotonically increasing column and a `WHERE column > ?` watermark clause, provided you have also added an index to make that pagination efficient. This way the scan doesn't need to consider dead tuples at all, and they 'only' waste space whilst waiting to be vacuumed, rather than also bogging down read perf.

There is a little subtlety around how you guarantee that the column is monotonically increasing, given concurrent writers, but the answer to that depends on what tricks you can fit into your application.
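
For illustration, here's a minimal sketch of that watermark idea. The `id` column, the partial index name, and the `$1` parameter are hypothetical, and it orders by `id` rather than `run_at` (reasonable if jobs are roughly queued in schedule order):

    -- Hypothetical: a sequence-backed id, plus a partial index so the
    -- scan can start above the dead region of completed jobs.
    CREATE INDEX jobs_pending_id_idx ON jobs (id) WHERE status = 'pending';

    -- The application maintains a watermark: an id below which nothing
    -- can still be pending. Scanning above it means the index scan never
    -- steps over the dead tuples of already-finished jobs.
    SELECT * FROM jobs
    WHERE status = 'pending'
      AND id > $1    -- $1 = current watermark
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED;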

3) I almost want to say that the one-line summary is 'Don't combine (very) long-running transactions with (very) high transaction rates in Postgres'

(Is this a fair representation?)

sebmellen 27 minutes ago||
Postgres can do so much. I see people choose Kafka and SQS for things that Graphile Worker could do all day long.
mikeocool 9 minutes ago|
“Use Postgres for everything” is a great philosophy at low/medium scale to keep things simple, but there comes a scaling point where I want my SQL database doing as little as possible.

It’s almost always the bottleneck/problem source in systems at that scale.

tibbar 4 minutes ago||
Yes. For example, you'll typically have a "budget" of 1-10k writes/sec, and a single heavy join can essentially take you offline. Even relatively modest enterprises typically need to shift some query patterns to OLAP/NoSQL/Redis/etc. before very long.
simeonGriggs 53 minutes ago||
Yo! Author here, I’ll be around if anyone’s got questions!
jeeybee 33 minutes ago|
Did you test with fillfactor < 100 on the queue table? With HOT updates, status changes can reuse dead space without creating new index entries, which seems like it could significantly delay the onset of the death spiral?
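
Something along these lines (80 is just an example value):

    -- Leave ~20% of each heap page free so status-only UPDATEs can be
    -- HOT (heap-only tuples): the row is rewritten on the same page with
    -- no new index entries. Caveat: HOT only applies when no indexed
    -- column changes, so this assumes status itself isn't indexed.
    ALTER TABLE jobs SET (fillfactor = 80);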
richwater 1 hour ago|
It would be nice if this ad at least explained a little bit of the technical side of the solution.
sgarland 52 minutes ago|
It sounds vaguely like InnoDB’s thread concurrency control, which uses "tickets" [0] as a unit of the maximum work a query can perform before re-queuing.

0: https://dev.mysql.com/doc/refman/8.4/en/innodb-performance-t...
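
For context, the knobs look roughly like this (values are illustrative, not recommendations):

    -- Cap how many threads may run inside InnoDB at once; each admitted
    -- thread gets a number of "tickets" (units of work) it can spend
    -- before it must leave and re-queue for entry.
    SET GLOBAL innodb_thread_concurrency = 16;
    SET GLOBAL innodb_concurrency_tickets = 5000;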