Posted by enether 10/29/2025

Kafka is Fast – I'll use Postgres (topicpartition.io)
559 points | 392 comments
jeeybee 7 days ago|
If you like the “use Postgres until it breaks” approach, there’s a middle ground between hand-rolling and running Kafka/Redis/Rabbit: PGQueuer.

PGQueuer is a small Python library that turns Postgres into a durable job queue using the same primitives discussed here — `FOR UPDATE SKIP LOCKED` for safe concurrent dequeue and `LISTEN/NOTIFY` to wake workers without tight polling. It’s for background jobs (not a Kafka replacement), and it shines when your app already depends on Postgres.
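
For readers who haven't used these primitives directly, here is a minimal sketch of the pattern itself (this is not PGQueuer's API, which is Python; the `jobs` table, `status` column, and `job_ready` channel are made-up names), shown here in Go with lib/pq:

```go
// Sketch of the two primitives: FOR UPDATE SKIP LOCKED for concurrent
// dequeue, LISTEN/NOTIFY to wake workers. Schema and channel are assumed.
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	"github.com/lib/pq" // registers the "postgres" driver and provides pq.NewListener
)

// dequeueOne claims at most one pending job. SKIP LOCKED lets many workers
// run this concurrently without blocking on rows another worker has locked.
func dequeueOne(db *sql.DB) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()

	var id int64
	var payload string
	err = tx.QueryRow(`
		SELECT id, payload FROM jobs
		WHERE status = 'pending'
		ORDER BY id
		LIMIT 1
		FOR UPDATE SKIP LOCKED`).Scan(&id, &payload)
	if err == sql.ErrNoRows {
		return nil // queue is empty
	}
	if err != nil {
		return err
	}

	fmt.Println("processing job", id, payload) // real work goes here

	if _, err := tx.Exec(`UPDATE jobs SET status = 'done' WHERE id = $1`, id); err != nil {
		return err
	}
	return tx.Commit()
}

func main() {
	dsn := "postgres://localhost/app?sslmode=disable" // placeholder DSN
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}

	// LISTEN so producers can NOTIFY 'job_ready' after inserting a job,
	// instead of workers polling in a tight loop.
	listener := pq.NewListener(dsn, time.Second, time.Minute,
		func(_ pq.ListenerEventType, err error) {
			if err != nil {
				log.Println("listener:", err)
			}
		})
	if err := listener.Listen("job_ready"); err != nil {
		log.Fatal(err)
	}

	for {
		if err := dequeueOne(db); err != nil {
			log.Println("dequeue:", err)
		}
		select {
		case <-listener.Notify: // woken up by a NOTIFY
		case <-time.After(30 * time.Second): // fallback poll
		}
	}
}
```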

Nice-to-haves without extra infra: per-entrypoint concurrency limits, retries/backoff, scheduling (cron-like), graceful shutdown, simple CLI install/migrations. If/when you truly outgrow it, you can move to Kafka with a clearer picture of your needs.

Repo: https://github.com/janbjorge/pgqueuer

Disclosure: I maintain PGQueuer.

Nifty3929 7 days ago||
I do agree that too often folks are looking for the cool new widget and looking to apply it to every problem, with fancy new "modernized" architectures and such. And Postgres is great for so much.

But I think an important point to those in camp 2 (the good guys in TFA's narrative) is to use tools for problems they were designed to solve. Postgres was not designed to be a pub-sub tool. Kafka was. Don't try to build your own pub-sub solution on top of Postgres, just use one of the products that was built for that job.

Another distressing trend I see is for every product to try to be everything to everyone. I do not need that. I just need your product to do its one thing very well, and then I will use a different product for a different thing I need.

shikhar 10/29/2025||
Postgres is a way better fit than Kafka if you want a large number of durable streams. But a flexible OLTP database like PG is bound to require more resources, and polling loops (not even long poll!) are not a great answer for following live updates.
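
To make the polling point concrete, a follower that tails a SQL event table by sequence number tends to look something like this (a Go sketch; the `events` table and its columns are assumptions, not anyone's actual schema):

```go
// A naive polling follower: query for new rows, sleep, repeat. The sleep is
// the tradeoff being criticized: latency when busy, wasted queries when idle.
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // registers the "postgres" driver
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	var cursor int64 // highest sequence number seen so far
	for {
		rows, err := db.Query(
			`SELECT seq, payload FROM events WHERE seq > $1 ORDER BY seq LIMIT 100`, cursor)
		if err != nil {
			log.Fatal(err)
		}
		for rows.Next() {
			var seq int64
			var payload []byte
			if err := rows.Scan(&seq, &payload); err != nil {
				log.Fatal(err)
			}
			cursor = seq
			log.Printf("event %d: %s", seq, payload)
		}
		rows.Close()

		time.Sleep(time.Second) // wake up even when nothing happened
	}
}
```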

Plug: If you need granular, durable streams in a serverless context, check out s2.dev

dagss 7 days ago|
s2.dev looks cool... I jumped around the home page a bit but couldn't quickly get a clear grasp of what it is. If it is about decoupling the Kafka approach and client-side libraries from the use of Kafka specifically, though, I am cheering for you.

Could you see using the s2.dev protocol on top of services using SQL in the way of the article, assigning event sequence numbers, as a good fit? Or is s2 fundamentally the component that assigns event numbers?
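
For concreteness, "the SQL database assigns the event sequence numbers" can be as simple as a bigserial column that the writer reads back with RETURNING. A rough Go sketch (the schema and stream name are assumptions, not taken from s2.dev or feedapi-spec):

```go
// The database hands out the stream position: seq is a bigserial, and the
// writer learns its position from RETURNING. Schema is illustrative only.
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // registers the "postgres" driver
)

const schema = `
CREATE TABLE IF NOT EXISTS events (
    seq     bigserial PRIMARY KEY, -- Postgres assigns the position
    stream  text  NOT NULL,
    payload jsonb NOT NULL
)`

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(schema); err != nil {
		log.Fatal(err)
	}

	var seq int64
	err = db.QueryRow(
		`INSERT INTO events (stream, payload) VALUES ($1, $2) RETURNING seq`,
		"orders", `{"type":"order_created","id":42}`).Scan(&seq)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("event stored at sequence", seq)
}
```

One subtlety such a protocol has to handle: with concurrent writers, sequence values can become visible out of commit order, so a reader that naively remembers "highest seq seen" can skip rows.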

I feel like we tried to do something similar to you, just for SQL DBs, but I am not sure:

https://github.com/vippsas/feedapi-spec

BinaryIgor 6 days ago||
That's golden:

"2. The other camp chases common sense

This camp is far more pragmatic. They strip away unnecessary complexity and steer clear of overengineered solutions. They reason from first principles before making technology choices. They resist marketing hype and approach vendor claims with healthy skepticism."

We should definitely apply Occam's razor as an industry far more often; simple tech stacks are easier to manage and, especially, to master (which you must do once it's no longer a toy app). Introduce a new component into your system only if it provides functionality you cannot get with reasonable effort using what you already have.

nchmy 7 days ago||
Seems like instead of a hand-rolled, polling pub/sub, you could do CDC with a Go logical replication/CDC library. There are surely several to choose from.

Or just use NATS for queues and pubsub - dead simple, can embed in your Go app and does much more than Kafka
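
For anyone who hasn't seen the embedding trick, running the NATS server in-process from a Go app looks roughly like this (a toy sketch; the port and subject name are arbitrary):

```go
// Embedded NATS: start the server inside the process, then use the normal
// client against it for pub/sub. Toy example, not a production setup.
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats-server/v2/server"
	"github.com/nats-io/nats.go"
)

func main() {
	// Start an embedded NATS server in this process.
	ns, err := server.NewServer(&server.Options{Port: 4222})
	if err != nil {
		log.Fatal(err)
	}
	go ns.Start()
	if !ns.ReadyForConnections(5 * time.Second) {
		log.Fatal("embedded NATS server did not start in time")
	}

	// Connect to it like any other NATS server.
	nc, err := nats.Connect(ns.ClientURL())
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Simple pub/sub over the in-process server.
	if _, err := nc.Subscribe("orders.created", func(m *nats.Msg) {
		log.Printf("got: %s", m.Data)
	}); err != nil {
		log.Fatal(err)
	}
	if err := nc.Publish("orders.created", []byte(`{"id":42}`)); err != nil {
		log.Fatal(err)
	}
	nc.Flush()
	time.Sleep(100 * time.Millisecond) // let the async handler run before exiting
}
```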

natmaka 6 days ago||
IMHO the main difference between PostgreSQL and any 'competitor' is that in most cases a software developer will quickly figure out not only how to use it properly for their use case, but also why some approach they adopted isn't right and is causing a non-negligible problem.

There are many reasons for this: most software developers have more than a vague idea about its underlying concepts, most error messages are clear, the documentation is superb, there are many ways to tap into the vast knowledge of a huge and growing community...

nyrikki 7 days ago||
> The claim isn’t that Postgres is functionally equivalent to any of these specialized systems. The claim is that it handles 80%+ of their use cases with 20% of the development effort. (Pareto Principle)

Lots of us who built systems when SQL was the only option know that doesn't hold over time.

SSTable-backed systems have their applications, and I have never seen dedicated Kafka teams the way we used to have dedicated DBAs.

We have the tools to make decisions based on real tradeoffs.

I highly recommend people dig into the appropriate tools to select vs making pre-selected products fit an unknown problem domain.

Tools are tactics, not strategies; tactics should be changeable as strategic needs change.

phendrenad2 10/29/2025|
Since everyone is offering what they think the "camps" should be, here's another perspective. There are two camps: (A) those who look at performance metrics ("96 cores to get 240MB/s is terrible") and assume that performance alone is enough to justify overruling any other concern, and (B) those who look at all of the tradeoffs, including budget, maintenance, ease of use, etc.

You see this a lot in the tech world. "Why would you use Python, Python is slow" (objectively true, but does it matter for your high-value SaaS that gets 20 logins per day?)
