[1] https://www.pgedge.com/blog/postgresql-performance-tuning
Modern DDR4 memory has a theoretical throughput of 25-30 GB/s; in practice it's more like 5-10 GB/s. With a fully packed 100 GB shared buffer, a single full scan takes between 3 and 20 seconds.
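To put numbers on that claim, here's the back-of-the-envelope arithmetic in Python, using the 100 GB buffer size and the bandwidth figures straight from the quote above:

```python
# Time to stream through a 100 GB shared buffer at the quoted bandwidths.
buffer_gb = 100

for label, bandwidth_gb_s in [("theoretical DDR4 peak", 30), ("realistic low end", 5)]:
    seconds = buffer_gb / bandwidth_gb_s
    print(f"{label}: {buffer_gb} GB / {bandwidth_gb_s} GB/s = {seconds:.1f} s")
# theoretical DDR4 peak: 100 GB / 30 GB/s = 3.3 s
# realistic low end: 100 GB / 5 GB/s = 20.0 s
```

So the 3-20 second range comes from dividing 100 GB by the fast and slow ends of the quoted bandwidth.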
Obviously DDR5 now exists, and servers have multiple memory channels, giving total memory bandwidth more like 200-500 GB/s. An old rule of thumb is that a computer should be able to read its entire memory in one second, although these days it's more like 1-4 seconds.
The clock replacement algorithm only needs to read metadata, so a full sweep of the metadata for 100 GB of buffers should take milliseconds, not seconds. (If they're talking about a table scan instead, then obviously reading from buffers is still going to be faster than reading from disk.)
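PostgreSQL's actual clock sweep is C code in src/backend/storage/buffer/freelist.c, with pins, locks, and atomic usage counts; this is just a toy Python sketch of the idea, showing why a sweep only ever touches a few bytes of per-buffer metadata and never the 8 kB pages themselves:

```python
# Toy clock-sweep victim search, loosely modeled on PostgreSQL's
# buffer replacement (the real thing also checks pin counts, etc.).
from dataclasses import dataclass

@dataclass
class BufferDesc:          # per-buffer metadata only -- a few bytes,
    usage_count: int = 0   # not the 8 kB page it describes

def clock_sweep(buffers: list[BufferDesc], hand: int) -> tuple[int, int]:
    """Advance the clock hand until a buffer with usage_count == 0 is
    found, decrementing counts along the way; return (victim, new_hand)."""
    while True:
        buf = buffers[hand]
        if buf.usage_count == 0:
            return hand, (hand + 1) % len(buffers)
        buf.usage_count -= 1          # give it another lap to prove itself
        hand = (hand + 1) % len(buffers)

bufs = [BufferDesc(usage_count=c) for c in (3, 0, 1)]
victim, hand = clock_sweep(bufs, hand=0)
print(victim)  # 1 -- a victim found without reading any page contents
```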
DDR4-3200 is ~26 GB/s per channel, and that's the upper end of what you'll see on ECC DDR4. DDR5-5600 is common now, and is ~45 GB/s per channel.
Zen 2/3 EPYCs on SP3 have 8 channels, Zen 4/5 EPYCs on SP5 have 12 channels per socket, and with both you get to have two sockets. That'd be ~410 GB/s on dual-socket SP3 and ~1080 GB/s on dual-socket SP5.
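Spelling out the multiplication behind those totals, using the per-channel figures from the comment above:

```python
# Aggregate memory bandwidth = per-channel rate x channels x sockets.
configs = [
    ("dual-socket SP3, DDR4-3200", 25.6, 8, 2),
    ("dual-socket SP5, DDR5-5600", 44.8, 12, 2),
]
for name, per_channel_gb_s, channels, sockets in configs:
    total = per_channel_gb_s * channels * sockets
    print(f"{name}: {per_channel_gb_s} x {channels} x {sockets} = ~{total:.0f} GB/s")
# dual-socket SP3, DDR4-3200: 25.6 x 8 x 2 = ~410 GB/s
# dual-socket SP5, DDR5-5600: 44.8 x 12 x 2 = ~1075 GB/s
```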
So, yeah, RAM goes brrr.
For beginners it is a huge footgun that makes people assume bad performance while evaluating. For the experienced PG admin, it is an annoyance and a time waster. Oh, the VM just gained 64 GB of RAM? PG will sit there and stare at it, since shared_buffers is sized at startup and only changes with a restart.
Apart from that, basically everyone starts with the PG guidelines or a generated template (25% for this, 25% divided by the number of sessions for that). Then you keep wondering how much performance you left on the table.
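A minimal sketch of the kind of rule-of-thumb template meant here. The parameter names (shared_buffers, work_mem, effective_cache_size) are real postgresql.conf settings, but the sizing logic is just the common 25%/75% guidelines, a starting point rather than a tuned config:

```python
# Classic rule-of-thumb PostgreSQL sizing: 25% of RAM for shared_buffers,
# another 25% split across sessions for work_mem.
def rule_of_thumb(ram_gb: int, max_connections: int = 100) -> dict[str, str]:
    shared_buffers_gb = ram_gb * 0.25
    work_mem_mb = (ram_gb * 0.25 * 1024) / max_connections
    return {
        "shared_buffers": f"{shared_buffers_gb:.0f}GB",
        "work_mem": f"{work_mem_mb:.0f}MB",
        "effective_cache_size": f"{ram_gb * 0.75:.0f}GB",  # another common guideline
    }

for setting, value in rule_of_thumb(ram_gb=64).items():
    print(f"{setting} = {value}")
# shared_buffers = 16GB
# work_mem = 164MB
# effective_cache_size = 48GB
```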