Posted by jnord 1 day ago
> The change isn't about the core operating system becoming resource-hungry. Instead, it reflects the way people use computers today—multiple browser tabs, web apps, and multitasking workflows, all of which demand additional memory.
So it's more about third-party software than the OS or desktop environment. Actually, nowadays it's recommended to have 8+ GB of RAM regardless of OS.
I just checked the memory usage on Ubuntu 24.04 LTS after closing all the browser tabs. It's about 2GB of 16GB total RAM. 26.04 LTS might have higher RAM usage but it seems unlikely that it will get anywhere close to 6GB.
https://www.microsoft.com/en-us/windows/windows-11-specifica...
4GB of RAM? What? I guess if your minimum is "able to start Windows and eventually reach the desktop", sure? I wouldn't even use Windows 11 with 8GB even though it would theoretically be okay.
Not okay as soon as you throw on the first security tool, lol.
I work in an enterprise environment with Win 11 where 16 GB is maxed out instantly as soon as you open the first browser tab, thanks to the background security scans and patch updates. This is even with memory compression turned on.
Apparently 13.6GB is in use (out of 64GB), and of that 4.7GB is Chrome. Yeah, I'm glad I'm not running this on an 8GB machine!
Current minimum specs should be more like
2 cores, 2 GHz min, with SSE4.2
128GB SSD
8GB RAM
I just tested this with 25.10 desktop, default gnome. With 24.04 LTS it doesn't even start up with 2GiB.
Apparently it's still in discussion but it's April now so seems unlikely.
Kind of weird how controversial it is considering DOS had QEMM386 way back in 1987.
Found a good visual explainer on this - https://vectree.io/c/linux-virtual-memory-swap-oom-killer-vs...
That's why SoftRAM gained infamy - they discovered during testing that swapping was so much faster than compression that the released version simply doubled the Windows swap file size and didn't actually compress RAM at all, despite their claims (and they ended up being sued into oblivion as a result...)
Over on the Mac, RAM Doubler really did do compression, but it (a) ran like treacle on the '030, (b) needed a bunch of kernel hacks, so it had compatibility issues with exactly the sort of "clever" software that actually required the most RAM, and (c) PowerMac users tended to have enough RAM anyway.
Disk compression programs were a bit more successful - DiskDoubler, Stacker, DoubleSpace et al. ISTR that Microsoft managed to infringe on Stacker's patents (or maybe even the copyright?) in MS DOS 6.2, and had to hastily release DOS 6.22 with a re-written version free of charge as a result. These were a bit more successful because they coincided with a general reduction in HDD latency that was going on at roughly the same time.
There's rather a lot of information in a single uncompressed 1080p image. I can't help but wonder what it all gets used for.
For example, Java applications will claim much more memory than they need for the heap. Most of that memory will sit unused, but it's necessary for the application to run fast. If you've ever run a Java app at a consistent 90% heap usage, you know it grinds to an absolute halt with constant collection.
The same is true for caching techniques. Reading from storage is slow, so it often makes sense to put stuff in RAM even if you're not using it very often.
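A quick way to see the gap between what the JVM has claimed from the OS and what the application is actually using is the `Runtime` API. A minimal sketch (the exact numbers depend on your JVM, flags, and how long the program has run):

```java
// Sketch: how much heap the JVM has claimed vs. what is actually in use.
// totalMemory() is what the JVM has reserved from the OS for the heap;
// freeMemory() is the unused part of that claim; maxMemory() is the ceiling.
public class HeapStats {
    // Heap bytes the JVM has claimed but the application isn't using.
    static long claimedButUnused() {
        return Runtime.getRuntime().freeMemory();
    }

    // Heap bytes actually occupied by objects (live or not-yet-collected).
    static long used() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("claimed: %d MiB, used: %d MiB, max: %d MiB%n",
                rt.totalMemory() >> 20, used() >> 20, rt.maxMemory() >> 20);
    }
}
```

On a typical desktop JVM the "claimed" figure is noticeably larger than "used" — that gap is the headroom the collector relies on.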
But I remember that in 2016, a decade ago, Fedora GNOME consumed about 1.6GB of RAM on my PC with 2GB of RAM. Considering that ten years later stock Ubuntu GNOME consumes only 400MB more, and that my new laptop has 16GB of RAM (the system may use more RAM when more RAM is installed), I think the increase isn't that bad for a decade. I thought it would be much worse.
What has changed? Why do I need 10x the RAM to open a handful of terminals and a text editor?
> What has changed? Why do I need 10x the RAM to open a handful of terminals and a text editor?
It’s not a factor of ten, but a 4K monitor has about four times as many pixels. Cached font bitmaps scale with that, photos take more memory, etc.
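For rough numbers (assuming 4 bytes per pixel, i.e. 8-bit RGBA — the actual pixel format depends on the compositor):

```java
// Back-of-the-envelope: memory for one uncompressed framebuffer,
// assuming 4 bytes per pixel (8-bit RGBA).
public class FramebufferMath {
    static long bytesFor(int width, int height) {
        return (long) width * height * 4;
    }

    public static void main(String[] args) {
        long hd  = bytesFor(1920, 1080); // about 8.3 MB
        long uhd = bytesFor(3840, 2160); // about 33.2 MB, exactly 4x HD
        System.out.printf("1080p: %.1f MB, 4K: %.1f MB%n",
                hd / 1e6, uhd / 1e6);
    }
}
```

Multiply that by every window whose pixels the compositor keeps around, plus scaled font and icon caches, and the 4K tax adds up quickly.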
> When Windows 2000 came out
In those times, when part of a window became uncovered, the OS would ask the application to redraw that part. Nowadays, the OS knows what’s there because it keeps the pixels around, so it can bitblit the pixels in.
Again, not a factor of ten, but it contributes.
The number of background processes likely also increased, and chances are you used to run fewer applications at the same time. Your handful of terminals may be a bit fuller now than it was back then.
Neither of those really explain why you need gigabytes of RAM nowadays, though, but they didn’t explain why Windows 2000 needed whatever it needed at its time, either.
The main real reason is “because we can afford to”.
Partly because increased RAM usage can sometimes improve execution speed / smoothness or security (caching, browser tab isolation).
Partly because developers have less pressure to optimize software performance, so they optimize other things, such as development time.
Here is an article about bloat: https://waspdev.com/articles/2025-11-04/some-software-bloat-...
But raw imagery is one of the few cases where you can legitimately require large amounts of RAM, because memory grows with the square of the image dimensions. You only need that raw state in a limited number of situations where you are actually manipulating the data, though. If you are dealing with images without descending to the pixel level, there's pretty much no reason to keep it all floating around in that form. You generally don't have more than a hundred icons on screen, and once you start fetching data from the slowest RAM in your machine, you get pretty decent speed gains from decompressing on the fly rather than moving the uncompressed form around.
The culprit is browsers, mostly.
Minimum memory as in this change sets a completely different expectation.
I suppose the biggest driver of increased RAM usage is Electron and the bloated world of text editors and other simple apps written in Electron.
For people wanting the old-fashioned fast and simple GUI experience, I recommend LXQt.
So basically, it's no use if you've never tasted a 120+ Hz display. And don't, because once you do, you won't go back.
But for gaming, it really is hard to go back to 60.
Incredibly, Linux has better desktop support for it than Windows: Windows' DWM runs full blast, while sway supports VRR on the desktop. Windows will only enable it for games (and only games that support it). Disclaimer: a Wayland compositor is required.
It’s not enabled by default on e.g. sway because on some GPU and monitor combos, it can make the display flicker. But if you can, give it a try!
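For reference, sway enables VRR per output. A minimal config line — the output name DP-1 is just an example; list yours with `swaymsg -t get_outputs`:

```
# ~/.config/sway/config — enable adaptive sync (VRR) on one output.
# "DP-1" is an example name; check swaymsg -t get_outputs for yours.
output DP-1 adaptive_sync on
```

If you hit the flicker issue mentioned above, `adaptive_sync off` reverts it without restarting the session.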
* The minimum variable refresh rate my display (LG C4) supports is 40 Hz.
But it has been a while since I've tried it; maybe I should look into it again.
The state of Wayland compositors moves fast, so support may be there now. The last thing I'm waiting for is sway 1.12, which will bring HDR.
On a CRT monitor the difference between running at 60 Hz and even a just slightly better 72 Hz was night and day. Unbearable flickering vs a much better experience. I remember having some little utility for Windows that'd allow the display rate to be 75 (not 72 but 75). Under Linux I was writing modelines myself (these were the days!) to get the refresh rate and screen size (in pixels) I liked: I was running "weird" resolutions like 832x604 @ 75 Hz instead of 800x600 @ 60 Hz, just to gain a little bit more screen real estate and a better refresh rate.
Now that monitors are flat panels, I sure as heck have no idea if 60 fps vs 120 fps changes anything for "desktop" usage. I don't think the problem of the image fading too quickly at 60 Hz that CRTs had is still present. But I'm not sure about it.
Gnome 3 seems similar to Unity nowadays, and it is pretty good.
I find it much easier to use than Windows or Mac, which is credit to the engineers who work on it.
I guess one of the few smaller things would be wayland, but this has so few features that you have to wonder why it is even used.
Those are all development tools. Has the runtime overhead grown proportionally, and what accounts for the extra weight?
As a side-note, that's how GC languages can perform so well in benchmarks. If you run benchmarks that generate huge amounts of garbage or consistently run the heap at 90%+ usage, that's when you'll see that orders of magnitude slowdown.
Oh also containers, lots more containerized applications on modern Linux desktops.
If a Java application requires an order of magnitude more memory than a similar C++ application, it's probably only superficially similar, and not only "because GC".
In Java, that's two objects, and one will be collected later.
What this means is that a C++ application running at 90% memory usage is going to use about the same amount of work per allocation/destruction as it would at 10% usage. The same IS NOT true for GC languages.
At 90% usage, each allocation and deallocation will be much more work, and will trigger collections and compactions.
It is absolutely true that GC languages can perform allocations cheaper than manual languages. But, this is only true at low amounts of heap usage. The closer you get to 100% heap usage, the less true this becomes. At 90% heap usage, you're gonna be looking at an order of magnitude of slowdown. And that's when you get those crazy statistics like 50% of program runtime being in GC collections.
So, GC languages really run best with more memory. Which is why both C# and Java pre-allocate much more memory than they need.
And, keep in mind I'm only referring to the allocation itself, not the allocation strategy. GC languages also have very poor allocation strategies, particularly Java where everything is boxed and separate.
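As an illustrative sketch (not anyone's production code), you can watch the JVM's collection counters climb while churning through short-lived garbage. Run it with a tight heap (e.g. `java -Xmx32m GcPressure`) and the count jumps dramatically compared to a roomy heap — the "GC languages run best with headroom" effect described above:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: allocate lots of immediately-dead objects and count how many
// garbage collections the JVM performs along the way. The tighter the
// heap ceiling, the more collections the same workload triggers.
public class GcPressure {
    // Sum collection counts across all of the JVM's collectors.
    static long totalCollections() {
        long n = 0;
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            n += Math.max(0, gc.getCollectionCount());
        }
        return n;
    }

    // Churn through roughly 4 GB of short-lived allocations.
    static void churn() {
        for (int i = 0; i < 1_000_000; i++) {
            byte[] garbage = new byte[4096]; // dead on the next iteration
            garbage[0] = 1;                  // touch it so it isn't trivially elided
        }
    }

    public static void main(String[] args) {
        long before = totalCollections();
        churn();
        System.out.println("collections during churn: "
                + (totalCollections() - before));
    }
}
```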
If there's no opt-out, that's a different story.
If Meta's business model is not lucrative, that's not my problem.
Given it's a field where you can put absolutely anything in (and probably randomize, if you want), how is this different than the situation today, where random sites ask you for your birthday (also unverified)? Moreover Meta already has your birthday. It's already mandated for account creation, so claims of "so they can earn money with users' preferences" don't make any sense.
https://www.theregister.com/2026/03/24/foss_age_verification...
Good luck when most libre users toss RH/Debian because of this and embrace GNU.
This is against HN guidelines: " Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
>The contents of the field will be protected from modification except by users with root privileges.
So... most users?
Most of the bloat these days is from containers, and Canonical's approach to Ubuntu since ~2014 has leaned heavily on upstream containers so they don't have to actually support their software ecosystem themselves. This has led to severe bloat and bad graphical theming and file system access.
Another one is memgaze, a program to visualize Linux process virtual memory spaces as RGB images and explore them using various binary visualization and sonification tools. I.e., you can click a Hilbert map of all processes, then in the new window click around inside the image of that particular process' virtual RAM and listen to it interpreted as an 8-bit WAV, or find and extract images, for example. Or search for strings, run digraph analysis, etc. http://superkuh.com/memgaze-page.html
Or feeed.pl, my very quick and low-resource-usage feed reader for 1000+ feeds, written in Perl/Gtk2, that is text only (no HTML, no images, etc.). It is really handy for loading .opml files and finding and fixing broken feeds using the heuristics I hard-coded in to find feed URLs. http://superkuh.com/blog/2025-09-13-2.html
These are a few I made 2025-26 that other people might care to use. But I have a lot more that just scratch my own particular itches. Like a Perl/Gtk2 version of MS Paint that interprets arbitrary loaded and painted images as sound, or the things that I use to monitor my ISP uptime/speed, etc.
And to say the desktop experience was more polished than what we have now is laughable. I remember that you couldn't have more than one application playing sound at the same time. At one point you had to manually configure XFree86 to be aware that your mouse had a middle button. And good luck getting anything vaguely awkward like WiFi or suspend-to-RAM working.
The Linux desktop is in a vastly better position now, even taking the Wayland mess into account.
First, it sounds like this 6GB requirement is more a suggestion/recommendation than a hard requirement. I'm also curious whether it actively uses all 6GB. From my own usage of Linux over the years, the OS itself isn't using that much RAM; the applications are, which almost always means the browser.
Secondly, I haven't used Ubuntu desktop in years, so I have no real idea if this is something specific to them. But I do use Fedora, and I would imagine the memory footprint can't be too different. While I could easily get away with <8GB of RAM, you really kind of don't want to if you're going to be doing anything heavier than web browsing or editing documents. Dev work? Or CAD, design, etc.? But this isn't unique to Linux.
When they turned CentOS into Stream, I cut my workstation over to Ubuntu. It has been a reasonable replacement. The only real issues were when dual-booting Win10 horked my grub, and snap being unable to sort itself out on occasion. When they release 26 as an LTS, I'm planning to update. You are spot on - the desktop itself is reasonably lean. 100+ tabs in Firefox... less so. Mind you, the amount of RAM in the workstations I'm using could buy a used car these days.
I believe Fedora and Ubuntu use about the same set of technologies: systemd, wayland, Gnome, etc. so it is about the same.
Apart from working out of the box, I don't really know what those distros have that I don't. I just have to admit managing network interfaces is really easy in GNOME.
With the skyrocketing price of RAM this might finally be the year of the Linux desktop. But it is not gonna be Gnome I guess.
Can't move to Linux because it's an Intel Atom and the Intel P-state driver for it is borked, never fixed.
i remember this and had no idea that's why it would be doing that. thanks, i learned something today.
That is the case with every mainstream JS engine out there and is one of the many tradeoffs of this kind.
This is garbage writing. Linux’s advantages are numerous and growing. Ubuntu ≠ Linux. WRT RAM requirements, Win 11’s 4GB requirement isn’t viable for daily use and won’t represent any practical machine configuration that has the requisite TPM 2 module. On the other side, the Linux ecosystem offers a wide variety of minimal distributions that can run on ancient hardware.
Maybe I’m just grouchy today but I would flag this content if sloppy MS PR was a valid reason.
Apps are still a huge gap on Linux, but as an OS, I choose it every time over Windows and MacOS.
"With Desktop" has 1GB minimum and 2GB recommended - along with Pentium 4, 1GHz cpu.
This seems like a recommendation for just getting to the desktop itself plus maybe some light usage. Anything more than that and the "recommendation" is fairly useless given the memory-hogging apps that are commonly used.
I don't know how they have done it but with the latest updates to Windows 11 (In the Canary Channel, Optional 29000 series) it is VERY fast in Chrome, even on a PC with just 4GB Ram.
So all those mocking and laughing, saying you might just get a window up, haven't a clue what they are talking about. It WAS terrible, but now it is much, much better. I don't know how they have done it, but they have. It might be due to Microsoft's aim to rewrite the kernel in Rust, and other things.
> Linux's advantage is slowly shrinking
Ubuntu is not Linux. Also, would love to see Windows running on 4GB. The article says that Ubuntu increased the requirements not because of the OS itself but to give a better user experience when people have many browser tabs open. Then it compares to Windows, which has lower nominal requirements but higher requirements in practice for a passable user experience.