1. Do you really need a queue? (Alternative: periodic polling of a DB)
2. What's your event volume and can it fit on one node for the foreseeable future, or even serverless compute (if not too expensive)? (Alternative: lightweight single-process web service, or several instances, on one node.)
3. If it can't fit on one node, do you really need a distributed queue? (Alternative: good ol' load balancing and REST APIs, maybe with async and retry semantics)
4. If you really do need a distributed queue, then you may as well use a distributed queue, such as Kafka. Even if you take on the complexity of managing a Kafka cluster, the programming and performance semantics are simpler to reason about than trying to shoehorn a distributed queue onto a SQL DB.
Unless employees expect that their best rewards come from making the current project as simple and effective as possible, it is highly unlikely that the project will be as simple as it could be.
Schema design and query design will make or break your app’s ability to scale without skyrocketing the bill; it’s as simple as that.
This one is a classic for MSSQL; most of it is applicable to Postgres.
I’m unsure what is unattractive about this but I guess anything can be a reason to spend a year playing with LLMs these days.
I’ve had the same problem with compliance work (lightly regulated market) and suddenly the scaling complaints go away when the renewals stop happening.
Which is wrong a lot of the time! You need to build what is needed. If only 10 people use your project, the design will be entirely different than if 10 million people use it
There is nothing wrong with building stuff, or career development. There is also nothing wrong with experimentation. You certainly would not want to incentivize the opposite behavior of never building anything unless it had 10 guarantors of revenue and technical soundness.
If you need people to focus, then you need them to be incentivized to focus. Do they see growth potential? Are they compensated such that other employers are undesirable? Do they see the risk of failure?
There's also a huge spectrum between "pick a job that's good for your career" and "at every step of the way I'll do whatever is best for me, the company and my coworkers be damned"
If you can't see that, just be open about it in the interview process
So sure "technically" it's not a queue, but in reality its used as a queue for 1000s of companies around the world for huge production workloads which no MQ system can support.
You really can't. Getting per-message acks, dynamically scaling competing consumers without having to repartition while retaining ordering, etc. requires a ton of hacks (client-side tracking, building your own storage on top of offset metadata, and so on), and you still won't have all of the features actual message queues provide.
To make it worse, there is very little public work/discussion on this, so you'll be on your own figuring out all of the quirks. The only notable example is https://github.com/confluentinc/parallel-consumer which is effectively abandoned
Note: I haven’t had a chance to test it out in anger personally yet.
here's the equivalent in parallel-consumer: https://github.com/confluentinc/parallel-consumer?tab=readme...
We often had millions of players online at a given moment which means lots of transactions!
So the competing solutions are: PostgreSQL or Kafka+PostgreSQL
Kafka does provide some extras there: handling load spikes, more clients than PG can handle natively, and resilience to some DB downtime. But is it worth the complexity? In most cases, no.
This IMO is better behaviour than RabbitMQ since you can always re-read messages once they have been processed, whereas generally with MQ systems the message is then marked for deletion and asynchronously deleted.
Exactly-once delivery is one of the hardest distributed systems problems. If you’ve “trivially” solved it, please show us your solution.
The trivial solution is to use Kafka. They're clearly saying that Kafka makes it trivial, not that it's trivial to solve from scratch.
I didn't say Kafka magically solves these problems for you, but it was required for the scalability we needed.
I can imagine, a 1 Billion dollar transaction accidentally gets processed by ten thousand client nodes due to a client app synchronization bug, company rethinks its dumb data dumper server strategy...news at 11.
You can make a bucket immutable, and entries have timestamps. I don't think any cloud provider makes claims about the accuracy or monotonicity of these timestamps, so you would merely get an ordering, not necessarily the ordering in which things truly occurred. But I have use cases where that is fine.
I believe with a clever naming scheme and cooperating clients it could be made to work.
But the application was fairly low volume in data and usage, so eventual consistency and capacity were not an issue. And yes, timestamp monotonicity is not guaranteed when multiple clients upload at the same time, so a unique ID was given to each client at startup and used in entry names to guarantee uniqueness. Metadata and prefixes were used to indicate the state of an object during processing.
Not ideal, but it was cheaper than a DB or a dedicated MQ. The application did not last, but I would try the approach again, adapted to the situation.
I was thinking that writes could be indexed/prefixed into timestamp buckets according to the client's local time. This can't be trusted, of course. But the application consumers could detect and reject any writes whose upload timestamp exceeds a fixed delta from the timestamp bucket it was uploaded to. That allows for arbitrary seeking to any point on the log.
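A minimal sketch of that bucketing idea, assuming boto3 and an invented bucket/key layout; the consumer trusts S3's server-side `LastModified` rather than the client's claimed bucket:

```python
import time
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "event-log-demo"   # hypothetical bucket
MAX_SKEW = 300              # fixed delta (seconds) a consumer tolerates

def publish(payload: bytes) -> str:
    """Write an event under a minute-granularity bucket from local time."""
    now = int(time.time())
    bucket = now - (now % 60)
    key = f"{bucket}/{uuid.uuid4()}"   # unique suffix avoids collisions
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    return key

def read_bucket(bucket: int):
    """Seek to any bucket; reject writes whose server-side upload time
    strayed too far from the bucket they claim to belong to."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"{bucket}/")
    for obj in resp.get("Contents", []):
        uploaded = obj["LastModified"].timestamp()  # assigned by S3
        if abs(uploaded - bucket) <= MAX_SKEW:
            yield obj["Key"]
```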
They are still "listeners"; it's just that events aren't pushed to the listener. Instead the API is designed for sheer throughput.
That was all deliberately pushed onto consumers to manage to achieve scale.
I believe RabbitMQ is much more balanced and is closer to what people expect from a high level queueing/pubsub system.
Acking in an MQ is very different.
In my experience it’s not the reads but the writes that are hard to scale up. Reading is cheap and can sometimes be done off a replica. Writing to PostgreSQL at a high sustained rate requires careful tuning and design. A stream of UPDATEs can be very painful, INSERTs aren’t cheap, and even batched COPY can be tricky.
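For the write side, the usual first step is funnelling inserts through COPY in batches rather than row-by-row INSERTs; a psycopg2 sketch with an invented table (and assuming values contain no tabs or newlines):

```python
import io
import psycopg2

def bulk_insert(conn, rows):
    """Stream a batch of (device_id, value) pairs through a single COPY."""
    buf = io.StringIO()
    for device_id, value in rows:
        buf.write(f"{device_id}\t{value}\n")  # COPY text format
    buf.seek(0)
    with conn.cursor() as cur:
        cur.copy_expert("COPY readings (device_id, value) FROM STDIN", buf)
    conn.commit()
```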
The scars from that kind of outage will never truly heal.
0: https://www.enterprisedb.com/blog/impact-full-page-writes
MQTT -> Redpanda (for message logs and replay, etc) -> Postgres/Timescaledb (for data) + S3 (for archive)
(and possibly Flink/RisingWave/Arroyo somewhere in order to do some alerting/incrementally updated materialized views/ etc)
this seems "simple enough" (but I don't have any experience with Redpanda) but is indeed one more moving part compared to MQTT -> Postgres (as a queue) -> Postgres/Timescaledb + S3
Questions:
1. my "fear" would be that if I use the same Postgres for the queue and for my business database, the "message ingestion" part could block the "business" part sometimes (locks, etc)? Also perhaps when I want to update the schema of my database and not "stop" the inflow of messages, not sure if this would be easy?
2. also that since it would write messages in the queue and then delete them, there would be a lot of GC/Vacuuming to do, compared to my business database which is mostly append-only?
3. and if I split the "Postgres queue" from "Postgres database" as two different processes, of course I have "one less tech to learn", but I still have to get used to pgmq, integrate it, etc, is that really much easier than adding Redpanda?
4. I guess most Postgres queues are also "simple" and don't provide "fanout" for multiple things (eg I want to take one of my IoT message, clean it up, store it in my timescaledb, and also archive it to S3, and also run an alert detector on it, etc)
What would be the recommendation?
There are globally shared resources, but for the most part, locks are held on specific rows or tables. Unrelated transactions generally won't block on each other.
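The standard trick for keeping competing workers from blocking each other on those row locks is `FOR UPDATE SKIP LOCKED`; a minimal sketch with invented table and columns:

```python
import psycopg2

def claim_next(conn):
    """Grab one pending job; rows locked by other workers are skipped,
    so workers never queue up behind each other."""
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, payload FROM jobs
            WHERE processed_at IS NULL
            ORDER BY id
            LIMIT 1
            FOR UPDATE SKIP LOCKED
            """
        )
        row = cur.fetchone()
        if row is None:
            conn.rollback()   # nothing available right now
            return None
        job_id, payload = row
        cur.execute("UPDATE jobs SET processed_at = now() WHERE id = %s",
                    (job_id,))
    conn.commit()
    return payload
```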
Also running a Very High Availability cluster is non-trivial. It can take a minute to fail over to a replica, and a busy database can take a while to replay the WAL after a reboot before it's functional again. Most people are OK with a couple minutes of downtime for the occasional reboot though.
I think this really depends on your scale. Are you doing <100 messages/second? Definitely stick with postgres. Are you doing >100k messages/second? Think about Kafka/redpanda. If you were comfortable with postgres (or you will be since you are building the rest of your project with it), then you want to stick with postgres longer, but if you are barely using it and would struggle to diagnose an issue, then you won't benefit from consolidating.
Postgres will also be more flexible. Kafka can only do partitions and consumer groups, so if your workload doesn't look like that (e.g. out of order processing), you might be fighting Kafka.
MQTT -> Postgres (+ S3 for archive)
> 1. my "fear" would be that if I use the same Postgres for the queue and for my business database...
This is a feature, not a bug. You can pair the handling of a message with the business data changes it produces in the same transaction. This isn't quite "exactly-once" handling, but it's really, really close!
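A sketch of that pairing, with invented names: the business change and the message bookkeeping share one transaction, so they commit or roll back together.

```python
def handle_next(conn):
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, account_id, amount FROM queue
            WHERE handled_at IS NULL
            ORDER BY id LIMIT 1
            FOR UPDATE SKIP LOCKED
            """
        )
        msg = cur.fetchone()
        if msg is None:
            conn.rollback()
            return
        msg_id, account_id, amount = msg
        # Business effect and "message handled" flag live in one
        # transaction: a crash before COMMIT leaves the message
        # unhandled, never half-done.
        cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (amount, account_id))
        cur.execute("UPDATE queue SET handled_at = now() WHERE id = %s",
                    (msg_id,))
    conn.commit()
```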
> 2. also that since it would write messages in the queue and then delete them, there would be a lot of GC/Vacuuming
Generally it's best practice in this case to never delete messages from a SQL "queue", but toggle them in-place to consumed and periodically archive to a long-term storage table. This provides in-context historical data which can be super helpful when you need to write a script to undo or mitigate bad code which resulted in data corruption.
Alternatively when you need to roll back to a previous state, often this gives you a "poor woman's undo", by restoring a time-stamped backup, copying over messages which arrived since the restoration point, then letting the engine run forwards processing those messages. (This is a simplification of course, not always directly possible, but data recovery is often a matter of mitigations and least-bad choices.)
Basically, saving all your messages provides both efficiency and data recovery optionality.
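A sketch of the mark-then-archive housekeeping described above, with invented table names and retention window:

```python
def archive_consumed(conn, batch=10_000):
    """Move old consumed messages from the hot queue table to a
    long-term archive table in one atomic statement."""
    with conn.cursor() as cur:
        cur.execute(
            """
            WITH moved AS (
                DELETE FROM queue
                WHERE id IN (
                    SELECT id FROM queue
                    WHERE consumed_at < now() - interval '7 days'
                    ORDER BY id
                    LIMIT %s
                )
                RETURNING *
            )
            INSERT INTO queue_archive SELECT * FROM moved
            """,
            (batch,),
        )
    conn.commit()
```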
> 3...
Legit concern, particularly if you're trying to design your service abstraction to match an eventual evolution of data platform.
> 4. don't provide "fanout" for multiple things
What they do provide is running multiple handlings of a queue, wherein you might have n handlers (each with its own "handled_at" timestamp column in the DB), and different handlers run at different priorities. This doesn't allow for workflows (i.e. a cleanup step) but does allow different processes to run on the same queue with different privileges or priorities. So the slow process (archive?) could run opportunistically or in batches, where time-sensitive issues (alerts, outlier detection, etc) can always run instantly. Or archiving can be done by a process which lacks access to any user data to algorithmically enforce PCI boundaries. Etc.
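A sketch of that layout, one `handled_at`-style column per handler (all names invented); each handler runs on its own schedule:

```python
HANDLER_COLUMNS = {"alerts": "alerted_at", "archive": "archived_at"}

def run_handler(conn, handler: str, work, batch: int = 100):
    """Process rows this handler hasn't seen yet, at its own pace."""
    column = HANDLER_COLUMNS[handler]   # whitelist, never user input
    with conn.cursor() as cur:
        cur.execute(
            f"""
            SELECT id, payload FROM events
            WHERE {column} IS NULL
            ORDER BY id LIMIT %s
            """,
            (batch,),
        )
        for event_id, payload in cur.fetchall():
            work(payload)
            cur.execute(
                f"UPDATE events SET {column} = now() WHERE id = %s",
                (event_id,),
            )
    conn.commit()
```

The alerts handler can then poll every second while the archive handler runs hourly in big batches, each tracking its own column.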
That’s a particularly nasty trap. Devs will start using this everywhere and it makes it very hard to move this beyond Postgres when you need to.
I’d keep a small transactional outbox for when you really need it and encourage devs to use it only when absolutely necessary.
I’m currently cleaning up an application that has reached the limit of vertical scaling with Postgres. A significant part of that is because it uses Postgres for every background work queue. Every insert into the queue is in a transaction—do you really want to rollback your change because a notification job couldn’t be enqueued? Probably not. But the ability is there and is so easy to do that it gets overused.
Now I get to go back through hundreds of cases and try to determine whether the transactional insert was intentional or just someone not thinking.
Ignoring the potential uses for this data, what you suggested has the exact same effect on Postgres at a tuple level. An UPDATE is essentially the same as a DELETE + INSERT, due to its MVCC implementation. The only way around this is with a HOT update, which requires (among other things) that no indexed columns were updated. Since presumably in this schema you’d have a column like is_complete or is_deleted, and a partial index on it, as soon as you toggle it, it can’t do a HOT update, so the concerns about vacuum still apply.
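To make that concrete, here is roughly the shape being discussed, as a sketch; because `is_complete` appears in the partial index's predicate, flipping it disqualifies the update from being HOT, so a dead tuple is left for vacuum either way:

```python
import psycopg2

conn = psycopg2.connect("dbname=demo")  # hypothetical DSN
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS queue (
            id          bigserial PRIMARY KEY,
            payload     text NOT NULL,
            is_complete boolean NOT NULL DEFAULT false
        );
        -- Small, fast index over only the pending rows:
        CREATE INDEX IF NOT EXISTS queue_pending
            ON queue (id) WHERE NOT is_complete;
    """)
    # is_complete is used by an index predicate, so this UPDATE cannot
    # be HOT: Postgres writes a new tuple version and leaves the old
    # one dead, much like DELETE + INSERT would.
    cur.execute("UPDATE queue SET is_complete = true WHERE id = %s", (1,))
conn.commit()
```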
Until your PostgreSQL instance goes down (even for reasons unrelated to Postgres) and then you have no fallback or queue for elasticity
Is this some scripting to automate your home, or are you trying to build some multi-tenant thing that you can sell?
If it's just scripting to automate your home, then you could probably get away with a single server and on-disk/in-memory queuing, maybe even sqlite, etc. Or you could use it as an opportunity to learn those technologies, but you don't really need them in your pipeline.
It's amazing how much performance you can get as long as the problem can fit onto a single node's RAM/SSD.
n) Do you really need S3? Is it cheaper than NFS storage on a compute node with a large disk?
There are many cases where S3 is absolutely cheaper though.
Your application thinks it's a normal disk but it isn't, so you get no timeouts, no specific errors for network issues, and extremely expensive calls camouflaged as quick FS ops (was any file modified in this folder? I'll just loop over them using my standard library's nice FS utilities). And you don't get atomic ops outside of mv, invalidation and caching are complicated, and your developers probably don't know the semantics of FS operations, which are much more complex and less well documented than e.g. Redis blob storage.
And then when you finally rip out NFS, you have thousands of lines of app and test code that assumes your blobs are on a disk in subtle ways.
I would think something like NFS is best suited for actual files instead of blobs you're serializing through a file system API?
You can run into issues with scheduled queues (e.g. run this job in 5 minutes) since the tables will be bigger, you need an index, and you will create the garbage in the index at the point you are querying (jobs to run now). This is a spectacularly bad pattern for postgres at high volume.
Doesn't PostgreSQL have transactional schema updates as a key feature? AIUI, you shouldn't be having any data loss as a result of such changes. It's also common to use views in order to simplify the management of such updates.
Then, if you ever need to switch to something more performant, it will be relatively easy.
It's a queue... how bad can you screw this up? My guess is, in most corporate environments, very very badly. Somehow something as complicated as consuming a queue (which isn't very complicated at all) will be done in such a way that it will require many months to change which queue is used in the future.
I'm a java dev and maybe my projects are about big integrations, but I've always needed queue like constructs and polling from a db was almost always a headache, especially with multiple consumers and publishers.
Sure it can be done, and in many projects we do have cron-jobs on different pods -- not a global k8s cron-job, but legacy cron jobs and it works fine.
Kafka does not YET support real queues (but I'm sure there's a high-profile KIP to add true queue-like behavior, per consumer group, with individual commits), and does not support server-side filtering.
But consumer groups and partitions have been such a blessing for me, it's very hard to overstate how useful they are with managing stateful apps.
But then a distributed queue is most likely not needed until you hit really humongous scale.
10 messages/s is only 864k/day. But in my testing (with Postgres 16) this doesn't scale that well when you need tens to hundreds of millions per day. Redis is much better than Postgres for that (for a simple queue), and beyond that Kafka is what I would choose if you're in the low few hundred million.
If someone is talking about per day numbers or per month numbers they're likely doing it to have the numbers sound more impressive and to make it harder to see how few X per second they actually handled. 11 million events per day sounds a whole lot more impressive than 128 events per second, but they're the same thing and only the latter usually matters in these types of discussions.
Periodic polling is awkward on both sides: you add arbitrary latency _and_ increase database load proportional to the number of interested clients.
Events, and ideally coalesced events, serve the same purpose as interrupts in a uniprocess (versus distributed) system, even if you don't want a proper queue. This at least lets you know _when_ to poll and lets you set and adjust policy on when / how much your software should give a shit at any given time.
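A sketch of that wake-up pattern using Postgres LISTEN/NOTIFY via psycopg2 (channel name and `drain_queue` are invented; note the LISTEN/NOTIFY scaling caveats raised elsewhere in this thread):

```python
import select
import psycopg2

conn = psycopg2.connect("dbname=app")  # hypothetical DSN
conn.autocommit = True                 # notifications arrive outside a txn
with conn.cursor() as cur:
    cur.execute("LISTEN new_events;")  # producers run: NOTIFY new_events

while True:
    # Sleep until pinged (or 60s as a safety net), then poll once.
    if select.select([conn], [], [], 60) != ([], [], []):
        conn.poll()
        conn.notifies.clear()  # coalesce: one pass covers all pending pings
    drain_queue()              # your normal polling function (assumed)
```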
The limiting factor for most workloads will probably be the number of connections, and the read/write mix. When you get into hundreds or thousands of pollers and writing many things to the queue per second Postgres is going to lose its luster for sure.
But in my experience with small/medium companies, a lot of workloads fit very very comfortably into what Postgres can handle easily.
The publisher has a set of tables (topics and partitions) of events, ordered and with each event having an assigned event sequence number.
Publisher stores no state for consumers in any way.
Instead, each consumer keeps a cursor (a variable holding an event sequence number) indicating how far it has read for each event log table it is reading.
Consumer can then advance (or rewind) its own cursor in whatever way it wishes. The publisher is oblivious to any consumer side state.
This is the fundamental piece of how event log publishing works (as opposed to queues, which are something else entirely; the article talks about both use cases).
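A sketch of that split, with invented names: the publisher only appends to `event_log`; each consumer persists its own cursor wherever it likes (here a dict stands in for the consumer's own storage).

```python
def consume_batch(conn, cursors: dict, consumer_id: str, batch: int = 100):
    """Read events past this consumer's cursor. The publisher keeps no
    per-consumer state; advancing or rewinding is entirely consumer-side."""
    last_seen = cursors.get(consumer_id, 0)
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT seq, payload FROM event_log
            WHERE seq > %s
            ORDER BY seq
            LIMIT %s
            """,
            (last_seen, batch),
        )
        rows = cur.fetchall()
    for seq, payload in rows:
        handle(payload)                # your handler (assumed)
        cursors[consumer_id] = seq     # persist this for real deployments
    return len(rows)
```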
Then you just query from the event_receiver_svcX side for events published > datetime and event_receiver_svcX = FALSE. Once read, set it to TRUE (sketched after the table below).
To mitigate too many active connections, have a polling/backoff strategy and place a proxy in front of the actual database to proactively throttle where needed.
But the event table:
| event_id | event_msg_src | event_msg | event_msg_published | event_receiver_svc1 | event_receiver_svc2 | event_receiver_svc3 |
|----------|---------------|---------------------|---------------------|---------------------|---------------------|---------------------|
| evt01 | svc1 | json_message_format | datetime | TRUE | TRUE | FALSE |
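The read side of that table then looks something like this sketch (table name invented, column whitelisted to keep the dynamic SQL safe):

```python
RECEIVER_COLUMNS = {"svc1": "event_receiver_svc1",
                    "svc2": "event_receiver_svc2",
                    "svc3": "event_receiver_svc3"}

def poll_events(conn, svc: str, since):
    column = RECEIVER_COLUMNS[svc]
    with conn.cursor() as cur:
        cur.execute(
            f"""
            SELECT event_id, event_msg FROM event_table
            WHERE event_msg_published > %s AND {column} = FALSE
            """,
            (since,),
        )
        rows = cur.fetchall()
        for event_id, _msg in rows:   # once read, flip the flag to TRUE
            cur.execute(
                f"UPDATE event_table SET {column} = TRUE WHERE event_id = %s",
                (event_id,),
            )
    conn.commit()
    return rows
```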
That sounds distributed to me, even if it wires different tech together to make it happen. Is there something about load balancing REST requests to different DB nodes that is less complicated than Kafka?
To be clear I wasn't talking about DB nodes, I was talking about skipping an explicit queue altogether.
But let's say you were asking about load balancing REST requests to different backend servers:
Yes, in the sense that "load balanced REST microservice with retry logic" is such a common pattern that it is better understood by SWEs and SREs everywhere.
No, in the sense that if you really did just need a distributed queue then your life would be simpler reusing a battle-tested implementation instead of reinventing that wheel.
Kafka has its own foibles and isn't a trivial set-it-and-forget-it thing to run at scale.
The Pareto principle is not some guarantee applicable to everything and anything saying that any X will handle 80% of some other thing's use cases with 20% the effort.
One can see how irrelevant its invocation is if we reverse it: does Kafka also handle 80% of what Postgres does with 20% of the effort? If not, what makes Postgres especially the "Pareto 80%" one in this comparison? Did Vilfredo Pareto have Postgres specifically in mind when forming the principle?
The Pareto principle concerns situations where power-law distributions emerge, not arbitrary server software comparisons.
Just say Postgres covers a lot of use cases people mindlessly go to shiny new software for that they don't really need, and is more battled tested, mature, and widely supported.
The Pareto principle is a red herring.
>The Pareto principle is not some guarantee applicable to everything and anything
Yes, obviously. The author doesn't say otherwise. There are obviously many ways of distributing things.
>One can see how irrelevant its invocation is if we reverse: does Kafka also handle 80% of what Postgres does with 20% the effort?
No
>If not, what makes Postgres especially the "Pareto 80%" one in this comparison?
Because it's simpler.
What implies everything can handle 80% of use cases with 20% of effort? It's like saying:
If normal distributions are real, and human height is normally distributed, then why isn't personal wealth? They are just different distributions.
Let me explain then...
> Yes, obviously. The author doesn't say otherwise.
Someone doesn't need to spell something out explicitly to imply it, or to fall into the kind of mistake I described.
While the author might not say otherwise, they do invoke the Pareto principle out of context, as if it's some readily applicable theorem.
>> If not, what makes Postgres especially the "Pareto 80%" one in this comparison?

> Because it's simpler.
Something being simpler than another thing doesn't make it a Pareto "80%" thing.
Yeah, I know you don't say this is always the case explicitly. But, like the OP, your answer uses this as if it's an argument in favor of something being the "Pareto 80%" thing.
It just makes it simpler. The initial Pareto formulation was even about wealth accumulation, which has nothing to do with simplicity or features, or even with comparing different classes of things (both sides referred to the same thing, people; specifically, 20% of the population owning 80% of the land).
The author's claim is that Postgres handles 80% of the use cases with 20% of the effort. It does not follow, as you recognize, that therefore EVERYTHING handles 80% of the use cases with 20% of the effort. Where does the author imply that it does? Based on his characterization of Kafka I think he'd say it solves a minority of use cases with a lot more effort.
I mean, isn't this the point the author explicitly makes? That Kafka's use-case/effort distribution is different (worse)?
For very simple software, most users use all the features. For very specialized software, there's very few users, and they use all the features.
> The claim is that it handles 80%+ of their use cases with 20% of the development effort. (Pareto Principle)
These are entirely different units! Development effort? How is this the Pareto Principle at all?
(To the GP's point, would "ls" cover 80% of the use cases of "cut" with 20% of the effort? Or would MS Word cover 80% of the use cases of postgresql with 20% of the effort? Because the scientific Pareto Principle tells us so?)
Hey, it's really not important, just an idea that with Postgres you can cover a lot of use cases with a lot less effort than configuring/maintaining a Kafka cluster on the side, and that's plausible. It's just that some "nerds" who care about being "technically correct" object to using the term "pareto principle" to sound scientific here, that bit is just nonsense.
First, it would be inverse, not reverse.
Second, no, it doesn't work that way; that's the point of the Pareto principle in the first place: what is 80% is always 80% and what is 20% is always 20%.
I know, since that's the whole point I was making. That the OP picked an arbitrary side to give the 80%, and that one could just as well pick the other one, and that you need actual arguments (and some kind of actual measurable distribution) to support one or the other being the 80% (that is, merely invoking the Pareto principle is not an argument).
I've been a happy Postgres user for several decades. Postgres can do a lot! But like anything, don't rely on maxims to do your engineering for you.
If you rent a cloud DB then it can scale elastically which can make this cheaper than Postgres, believe it or not. Cloud databases are sold at the price the market will bear not the cost of inputs+margin, so you can end up paying for Postgres as much as you would for an Oracle DB whilst getting far fewer features and less scalability.
Source: recently joined the DB team at Oracle, was surprised to learn how much it can do.
Wouldn't OrioleDB solve that issue though?
Postgres isn’t meant to be a guaranteed permanent replacement.
It’s a common starting point for a simpler stack which can retain a great deal of flexibility out of the box and increased velocity.
Starting with Postgres lets the bottlenecks reveal themselves, and then you optimize from there.
Maybe a tweak to Postgres or resources, or consider a jump to Kafka.
It often doesn't.
[1] https://rubyonrails.org/2024/11/7/rails-8-no-paas-required
Also, LISTEN/NOTIFY do not scale, and they introduce locks in areas you aren't expecting - https://news.ycombinator.com/item?id=44490510
But anytime you treat a database, or a queue, like a black box dumpster, problems will ensue.
Of course the other 99% is the remaining 1%.
And the thing is, a server from 10 years ago running postgres (with a backup) is enough for most applications to handle thousands of simultaneous users. Without even going into the kinds of optimization you are talking about. Adding ops complexity for the sake of scale on the exploratory phase of a product is a really bad idea when there's an alternative out there that can carry you until you have fit some market. (And for some markets, that's enough forever.)
When you’re only doing hundreds or thousands of transactions to begin with, it doesn’t really have much of an impact out of the gate.
Of course there will be someone who will pull out something that won’t work but such examples can likely be found for anything.
We don’t need to fear simplification, it is easy to complicate later when the actual complexities reveal themselves.
Naive approach with sequence (or serial type which uses sequence automatically) does not work. Transaction "one" gets number "123", transaction "two" gets number "124". Transaction "two" commits, now table contains "122", "124" rows and readers can start to process it. Then transaction "one" commits with its "123" number, but readers already past "124". And transaction "one" might never commit for various reasons (e.g. client just got power cut), so just waiting for "123" forever does not cut it.
Notifications can help with this approach, but then you can't restart old readers (and you don't need monotonic numbers at all).
https://www.oreilly.com/library/view/designing-data-intensiv...
You can generate distributed monotonic number sequences with a Lamport Clock.
https://en.wikipedia.org/wiki/Lamport_timestamp
The wikipedia entry doesn't describe it as well as that book does.
It's not the end of the puzzle for distributed systems, but it gets you a long way there.
See also Vector clocks. https://en.wikipedia.org/wiki/Vector_clock
Edit: I've found these slides, which are a good primer for solving the issue, page 70 onwards "logical time":
https://ia904606.us.archive.org/32/items/distributed-systems...
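The Lamport rules themselves are tiny; a sketch:

```python
class LamportClock:
    """Logical clock: a counter per process, merged on every message."""

    def __init__(self) -> None:
        self.time = 0

    def tick(self) -> int:
        """Local event (including sends): just advance."""
        self.time += 1
        return self.time

    def receive(self, msg_time: int) -> int:
        """Merge: jump past the sender's stamp, then advance."""
        self.time = max(self.time, msg_time)
        return self.tick()
```

This yields only a partial order; ties are conventionally broken by process ID when a total order is needed.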
Another way to speed it up is to grab unique numbers in batches instead of just getting them one at a time. No idea why you want your numbers to be in absolute sequence. That's hard in a distributed system. Probably best to relax that constraint and find some other way to track individual pieces of data. Or even better, find a way so you don't have to track individual rows in a distributed system.
If you would rather have readers waiting and parallel writers there is a more complex scheme here: https://blog.sequinstream.com/postgres-sequences-can-commit-...
In a sense this is what Kafka IS architecturally: The component that assigns event sequence numbers.
Another approach which I used in the past was to assign sequence numbers after committing. Basically a separate process periodically scans the set of un-sequenced rows, applies any application defined ordering constraints, and writes in SNs to them. This can be surprisingly fast, like tens of thousands of rows per second. In my case, the ordering constraints were simple, basically that for a given key, increasing versions get increasing SNs. But I think you could have more complex constraints, although it might get tricky with batch boundaries
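A sketch of that post-commit sweeper, with an invented schema and an `event_sn` sequence: rows are inserted with `sn` NULL, and a single background process stamps committed rows in application order, so readers never see gaps from still-open transactions.

```python
def assign_sequence_numbers(conn, batch: int = 10_000):
    """Single sweeper: uncommitted rows are invisible to this SELECT,
    so every row we stamp is already durable."""
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT id FROM events
            WHERE sn IS NULL
            ORDER BY key, version   -- the application's ordering constraint
            LIMIT %s
            """,
            (batch,),
        )
        ids = [r[0] for r in cur.fetchall()]
        for row_id in ids:
            cur.execute(
                "UPDATE events SET sn = nextval('event_sn') WHERE id = %s",
                (row_id,),
            )
    conn.commit()
    return len(ids)
```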
There needs to be serialization happening somewhere, either by writers or readers waiting for their turn.
What Kafka "is" in my view is simply the component that assigns sequential event numbers. So if you publish to Kafka, Kafka takes the same locks...
The way to increase throughput is to add more shards (partitions) to a topic.
Isn't it a bit of a white-whale thing that one solution can solve all one's subscriber problems? AFAIK even with Kafka this isn't completely watertight.
So there is a much more serious issue at stake here than event ordering/consistency.
As it happens, if you use event log tables in SQL "the Kafka way" you actually get a guarantee on event ordering too as a side effect, but that is not the primary goal.
More detailed description of problem:
https://github.com/vippsas/mssql-changefeed/blob/main/MOTIVA...
A downside is the lack of tooling client side. For many using Kafka is worth it simply for the tooling in libraries consumer side.
If you just want to write an event handler function there is a lot of boilerplate to manage around it (persisting read cursors, etc.).
We introduced a company standard for one service pulling events from another that fits well together with events stored in SQL.
https://github.com/vippsas/feedapi-spec
Nowhere close to Kafka's maturity in client side tooling but it is an approach for how a library stack could be built on top making this convenient and have the same library toolset support many storage engines. (On the server/storage side, Postgres is of course as mature as Kafka...)
Not your parent poster, but Kafka is often treated like a message broker and it ain't that. Specifically, it has no concept of NACK-ing messages; a message is either processed or not processed. There's no way for the client to say "skip this message and hand it to another worker" or "I have this weird message but I don't know how to process it, can you take it back?".
What people very commonly do is to instead move the unprocessed message to a dead-letter-queue, which at least clears the upstream queue but means you have to sift through the dead-letter-queue and figure out how to rescue messages.
Also people often think "I can read 100 messages in a batch and handle them individually in the client" while not considering that if some of the messages fail to send (or crash the client, losing the entire batch), Kafka isn't monitoring to say "hey you haven't verified that message 12 and 94 got processed correctly, do you want to keep working on them or should I assign them to someone else?"
Basically, in Kafka, the offset pointer should only be incremented after the client is 100% sure it is done with the message and the output has been written to durable storage if you care about the outcome. Otherwise you risk "skipping" messages because the client crashed or otherwise burped when trying to process the message.
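In client code that means turning off auto-commit and committing only after the output is durable; a sketch with the confluent-kafka Python client (broker, topic, and handler invented):

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-processor",
    "enable.auto.commit": False,   # we advance the offset ourselves
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None:
        continue
    if msg.error():
        continue                         # real code: inspect and log
    process_and_persist(msg.value())     # durable side effect (assumed)
    # Only now is it safe to move the offset past this message.
    consumer.commit(message=msg, asynchronous=False)
```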
Also Kafka topic partitions are semi-parallel streams that are not necessarily time ordered relative to each other... It's just another pinch point.
Consider exploring NATS Jetstream and its MQTT 3.1.1 mode and see if it suits your MQTT needs? Also I love Bento for declarative robust streaming ETL.
If you don't need what kafka offers, don't use it. But don't pretend you're on to something with your custom 5k msg/s PG setup.
https://www.youtube.com/watch?v=7CdM1WcuoLc
Getting even less than that throughput on 3x c7i.24xlarge (a total of 288 vCPUs) is bafflingly wasteful.
Just because you can do something with Postgres doesn't mean you should.
> 1. One camp chases buzzwords.
> 2. The other camp chases common sense
In this case, is "Postgres" just being used as a buzzword?
[Disclosure: I work for Redpanda; we provide a Kafka-compatible service.]
Kafka is a full on steaming solution.
Postgres isn’t a buzzword. It can be a capable placeholder until it’s outgrown. One can arrive at Kafka with a more informed run history from Postgres.
Freudian slip? ;)
When you have C++ code, the number of external folks who want to — and who can effectively, actively contribute to the code — drops considerably. Our "cousins in code," ScyllaDB last year announced they were moving to source-available because of the lack of OSS contributors:
> Moreover, we have been the single significant contributor of the source code. Our ecosystem tools have received a healthy amount of contributions, but not the core database. That makes sense. The ScyllaDB internal implementation is a C++, shard-per-core, future-promise code base that is extremely hard to understand and requires full-time devotion. Thus source-wise, in terms of the code, we operated as a full open-source-first project. However, in reality, we benefitted from this no more than as a source-available project.
Source: https://www.scylladb.com/2024/12/18/why-were-moving-to-a-sou...
People still want to get free utility from the source-available code. Less commonly they want to be able to see the code to understand it and potentially troubleshoot it. Yet asking for active contribution is, for almost all, a bridge too far.
(Notably, they're not arguing that open source reusers have been "unfair" to them and freeloaded on their effort, which was the key justification many others gave for relicensing their code under non-FLOSS terms.)
In case anyone here is looking for a fully-FLOSS contender that they may want to perhaps contribute to, there's the interesting project YugabyteDB https://github.com/yugabyte/yugabyte-db
(Personally, I have no issues with the AGPL and Stallman originally suggested this model to Qt IIRC, so I don't really mind the original split, but that is the modern intent of the strategy.)
Again, I'm happy to use AGPL software, I just disagree that the intent here is that different to any of the other projects that switched to the proprietary BSL.
As a maintainer of several free software projects, there are lots of issues with how projects are structured and user expectations, but I struggle to see how proprietary licenses help with that issue (I can see -- though don't entirely buy -- the argument that they help with certain business models, but that's a completely different topic). To be honest, I have no interest in actively seeking out proprietary software, but I'm certainly in the minority on that one.
On the other hand, if they don't expect people outside their company to know C++ well enough to contribute usefully, they probably shouldn't expect people outside their company to be able to compete with them either.
Really, though, the reason to go open-source is because it benefits your customers, not because you get contributions, although you might. (This logic is unconvincing if you fear they'll stop being your customers, of course.)
A colleague and I (mostly him, but on my advice) worked up a set of patches to accept and emit JSON and YAML in the CLI tool. Our use case at the time was setting things up with a config management system using the already built tool RedPanda provides without dealing with unstructured text.
We got a lot of good use out of RedPanda at that org. We’ve both moved on to a new employer, though, and the “no offering RedPanda as a service” spooked the company away from trying it without paying for the commercial package. Y’all assured a couple of us that our use case didn’t count as that, but upper management and legal opted to go with Kafka just in case.
"The use of fsync is essential for ensuring data consistency and durability in a replicated system. The post highlights the common misconception that replication alone can eliminate the need for fsync and demonstrates that the loss of unsynchronized data on a single node still can cause global data loss in a replicated non-Byzantine system."
However, for all that said, Redpanda is still blazingly fast.
https://www.redpanda.com/blog/why-fsync-is-needed-for-data-s...
To be fair, since without fsync you don't have any ordering guarantees for your writes, a crash has a good chance of corrupting your data, not just losing recent writes.
That's why in PostgreSQL it's feasible to disable https://www.postgresql.org/docs/18/runtime-config-wal.html#G... but not to disable https://www.postgresql.org/docs/18/runtime-config-wal.html#G....
Can you lose one Postgres instance?
AKA "Medium Data" ?
It also scales to very large clusters.
It's very common to start adding more and more infra for use cases that, while they could technically be covered better with new stuff, can be served by already existing infrastructure, at least until you have proof that you need to grow it.
This is literally the point the author is making.
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize."
It's not so hard. You interpret it how it is written. Yes, they say one camp chases buzzwords and another chases common sense. Critique that if you want to. That's fine.
But what's not written in the OP is some sort of claim that Postgres performs better than Kafka. The opposite is written. The OP acknowledges that Kafka is fast. Right there in the title! What's written is OP's experiments and data that shows Postgres is slow but can be practical for people who don't need Kafka. Honestly I don't see anything bewildering about it. But if you think they're wrong about Postgres being slow but practical that's something nice to talk about. What's not nice is to post snarky comments insinuating that the OP is asking you to design unscalable solutions.
In this case, you do just need a single fuel truck. That's what it was built for. Avoiding using a design-for-purpose tool to achieve the same result actually is wasteful. You don't need 288 cores to achieve 243,000 messages/second. You can do that kind of throughput with a Kafka-compatible service on a laptop.
[Disclosure: I work for Redpanda]
Kafka et al definitely have their place, but I think most people would be much better off reaching for a simpler queue system (or for some things, just using Postgres) unless you really need the advanced features.
* Lack of interest from other team members, which translated to them doing what they thought was a sufficiently minimal amount of knowledge transfer
* An (unwise) attitude that "it's already set up and configured, and terraformed, so we can just acquire that knowledge if and when it's needed"
* Kafka guy left a lot faster than anybody really expected, not leaving much time and practically no documentation
* The rest of the team was already overwhelmed with other responsibilities and didn't have much bandwidth available
* Nobody wanted to be the person/people that ended up "owning" it, so there was a reverse incentive
Postgres is the solution in question of the article because I simply assume the majority of companies will start with Postgres as their first piece of infra. And it is often the case. If not - MySQL, SQLite, whatever. Just optimize for the thing you know, and see if it can handle your use case (often you'll be surprised)
You should be able to install it within minutes.
None of this applies to Redpanda.
Yet to also be fair to the Kafka folks, Zookeeper is no longer default and hasn't been since April 2025 with the release of Apache Kafka 4.0:
"Kafka 4.0's completed transition to KRaft eliminates ZooKeeper (KIP-500), making clusters easier to operate at any scale."
Source: https://developer.confluent.io/newsletter/introducing-apache...
> This is literally the point the author is making.
Exactly! I just don't understand why HN invariably always tends to bubble up the most dismissive comments to the top that don't even engage with the actual subject matter of the article!
Obviously it's possible to build, for example, a machine with 2 cores, a 10Gbps network link, and a single HDD that would falsify my statement.
CPU is more tricky but I’m sure it can be shown somehow
It's a fair point that if you already have a pgsql setup, and only need a few messages here and there, then pg is fine. But yeah, the 96 vcpu setup is absurd.
Is anyone actually reading the full article, or just reacting to the first unimpressive numbers you can find and then jumping on the first dismissive comment you can find here?
Benchmarking Kafka isn't the point here. The author isn't claiming that Postgres outperforms Kafka. The argument is that Postgres can handle modest messaging workloads well enough for teams that don't want the operational complexity of running Kafka.
Yes, the throughput is astoundingly low for such a powerful CPU but that's precisely the point. Now you know how well or how bad Postgres performs on a beefy machine. You don't always need Kafka-level scale. The takeaway is that Postgres can be a practical choice if you already have it in place.
So rather than dismissing it over the first unimpressive number you find, maybe respond to that actual matter of TFA. Where's the line where Postgres stops being "good enough"? That'll be something nice to talk about.
Or they could have not mentioned Kafka at all and just demonstrated their pub/sub implementation with PG. They could have not tried to make it about the buzzword-chasing, resume-driven-engineering people vs. common-sense folks such as himself.
I'm glad the OP benchmarked on the 96 vCPU server. So now I know how well Postgres performs on a large CPU. Not very well. But if the OP had done their benchmark on a low CPU, I wouldn't have learned this.
I might well be talking out of my arse but if you're going to implement pub/sub in Postgres, it'd be worth designing around its strengths and going back to basics on event sourcing.
Never used Kafka myself, but we extensively use Redis queues with some scripts to ensure persistency, and we hit throughputs much higher than those in equivalent prod machines.
Same for Redis pubsubs, but those are just standard non-persistent pubsubs, so maybe that gives it an upper edge.
zeek -> kafka -> logstash -> elastic
There are two poles:
1. Folks constantly adopting the new tech, whatever the motivation, and 2. "I learned a thing and shall never learn anything else, ever."
Of course nobody sits exactly at either pole, but the closer you are to either, the less pragmatic you are likely to be.
I think it's still just 2 poles. However, I probably shouldn't have prescribed a motivation to the latter pole, as I purposely did not with the former.
Pole 2 is simply never adopt anything new ever, for whatever the motivation.
In my company most of our topics need to be consumed by more than one application/team, so this feature is a must have. Also, the ability to move the offset backwards or forwards programmatically has been a life saver many times.
Does Postgres support this functionality for their queues?
So if you want an individual offset, then yes, the consumer could just maintain their own… however, if you want a group’s offset, you have to do something else.
Is a queuing system baked into Postgres? Or are there client libraries that make it look like one?
And do these abstractions allow for arbitrarily moving the offset for each consumer independently?
If you're writing your own queuing system using pg for persistence obviously you can architect it however you want.
I don't know what kind of native support PG has for queue management, the assumption here is that a basic "kill the task as you see it" is usually good enough and the simplicity of writing and running a script far outweighs the development, infrastructure and devops costs of Kafka.
But obviously, whether you need stuff to happen in 15 seconds instead of 5 minutes, or 5 minutes instead of an hour is a business decision, along with understanding the growth pattern of the workload you happen to have.
Here is one: https://pgmq.github.io/pgmq/
Some others: https://github.com/dhamaniasad/awesome-postgres
Most of my professional life I have considered Postgres folks to be pretty smart, while I by chance happened to go with MySQL and it became the RDBMS I think in by default.
Heavily learning about Postgres recently has been okay, not much different than learning the tweaks for MSSQL, Oracle, or others. You just have to be willing to slow down a little for a bit and enjoy it instead of expecting to rush through everything.
But it looks like a queue, which is a fundamentally different data structure from an event log, and Kafka is an event log.
They are very different use cases: work distribution vs. pub/sub.
The article talks about both use cases, assuming the reader is very familiar with the distinction.
How is it common sense to try to re-implement Kafka in Postgres? You probably need something similar but simpler. Then implement that! But if you really need something like Kafka, then... use Kafka!
IMO the author is now making the same mistake as some Kafka evangelists that try to implement a database in Kafka.
That said, I don't consider running Kafka to be a headache. I work at a mid-sized company, processing billions of Kafka events per day and it's never been a problem, even locally when I'm processing hundreds of events per day.
You set it up, forget about it, and it scales endlessly. You don't have to rewrite anything and it provides a nice separation layer between your system components.
When starting out, you can easily run Kafka, DB, API on the same machine.
Vendors frequently push that narrative so they can sell their own managed (or proprietary) solution on it. With a decent AI model (e.g ChatGPT Pro), it's easier than ever to figure out best practices and conventions.
That being said, my point is more about the organizational overhead. Deploying Kafka still means you need to learn how it works, why it's good, its configs, API, how to debug it, set up observability, yada yada.
Except that the burden is on all clients to coordinate to avoid processing an event more than once, since Kafka is a brainless invention just dumping data forever into a serial log.
Do you mean different consumers within the same consumer group? There's no technology out there that will guarantee exactly-once delivery, it's simply impossible in a world where networks aren't magically 100% reliable. SQS, RedPanda, RabbitMQ, NATS... you call it, your client will always need idempotency.
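Since every delivery guarantee bottoms out at at-least-once, the consumer-side fix is idempotency; a common sketch is a dedup table keyed by message ID, checked in the same transaction as the business change (names invented):

```python
def handle_once(conn, msg_id: str, payload):
    """Apply a message's effect at most once, using the DB as the dedup."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO processed_messages (msg_id) VALUES (%s) "
            "ON CONFLICT (msg_id) DO NOTHING",
            (msg_id,),
        )
        if cur.rowcount == 0:        # already handled by an earlier delivery
            conn.rollback()
            return
        apply_business_change(cur, payload)  # assumed; same transaction
    conn.commit()
```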
The author is suggesting to avoid this solution and roll your own instead.