On Linux, if you just run SDDM to launch Xfce, you will quickly OOM the system, because SDDM stays in memory. The same goes for most display managers. So the real way is to just `startx` your desktop environment directly and use console login.
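A minimal ~/.xinitrc for that, assuming i3 is what you end up launching (swap in whatever session command you actually use):
# ~/.xinitrc, run by startx after console login
# exec replaces the shell, so X exits when the window manager quits
exec i3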
i3 is the best call for usability/a modern feeling, with extremely low memory usage. The reasoning is that, if you're used to sway or i3, this will feel like home, and it has all the features you need to be productive. Anything else will eat more RAM, from what I've tried. It also feels really fast, even if your GPU is struggling, because there are no animations and very little movement.
I would personally recommend Alpine, as it really comes with nothing extra. You can then install a desktop environment manually (or use their setup-desktop script if you have plenty of RAM and storage). TinyCore is a bit too wild to do modern computing on; the paradigms are just too outdated, the installation is a bit of a pain, and the installer would OOM on the same system where I can run my entire i3 alpine setup.
DSL seems cool, I haven't tried it; I just wanted to share my experience.
You can try all of this by setting up a qemu VM. Be aware that you will need more RAM just for the BIOS, so maybe if you configure 210MB, you'll end up with around 128MB usable, or so. Your OS will report how much is usable, accurately.
You can then set CPU cores with usage limits, limit HDD speeds to 2000s-era HDD speeds (so that your swap isn't fast), and so on. It's a fun exercise to try to develop software on such a limited machine, and it's fun to see which software launches and which doesn't.
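As a rough sketch (the disk/ISO names are placeholders and the throttling option names vary between QEMU versions, so check your local docs):
qemu-system-x86_64 \
  -m 210M -smp 1 \
  -drive file=alpine.img,if=virtio,throttling.bps-total=30000000 \
  -cdrom alpine-standard.iso
That caps the virtual disk at roughly 30MB/s, which is in the ballpark of an early-2000s IDE drive; starving the CPU is easier from the host side, e.g. with cpulimit or cgroups on the qemu process.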
*: the browser is an issue. Firefox is the preferable option, but it wouldn't launch on so little RAM. NetSurf or elinks/lynx etc. is the way to go, but websites that rely on JS to work, like any and all Rust documentation sites, will be completely unusable.
Current version clocks in at ~700MB, which is again very small compared to any modern Linux installation media.
On the other hand, it seems like DSL takes a more extreme approach to slimming down than the i3/XFCE route, plus DSL contains Dillo, which is arguably the latest modern-ish (to the extent possible) and lightest browser in existence.
Edit: https://www.brow.sh/
Note that this does present a bit of a man-in-the-middle scenario, and Opera's chief income is from advertising (and "query").
~/.config/mpv/config:
#start
ytdl-format=bestvideo[height<=?480][fps<=?30]+bestaudio/best
ao=sndio
vo=gpu,xv
audio-pitch-correction=no
quiet=yes
pause=no
profile=fast
vd-lavc-skiploopfilter=all
#demuxer-cache-wait=yes
#demuxer-max-bytes=4MiB
#end
~/yt-dlp.conf:
#start
--format=bestvideo[height<=?480][fps<=?30]+bestaudio/best
#end
Cookies setup for HN:
~/.dillo/cookiesrc
news.ycombinator.com ACCEPT
.news.ycombinator.com ACCEPT_SESSION
hn.algolia.net ACCEPT
.hn.algolia.net ACCEPT_SESSION
- Use ZRAM: sudo modprobe zram
sudo zramctl --find --size 64M
sudo mkswap /dev/zram0
sudo swapon -p 99 /dev/zram0
(a quick way to verify it's active is sketched after this list)
- Tiling sucks on small resolutions. Use CWM or IceWM.
- XTerm is very small and works fine. I can post Xresources here.
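To confirm the zram swap took effect and see which compressor the kernel picked (standard util-linux tools and the usual zram sysfs path, so this should work on most distros):
cat /sys/block/zram0/comp_algorithm   # available compressors, selected one in brackets
zramctl                               # device size, original vs. compressed data
swapon --show                         # the zram swap should show up with priority 99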
"Back in the day" people were running HP technical workstations, with X11 with CDE, with 128MB RAM, on Pentium-II equivalent speed CPUs - and they liked it!
Kids these days...
Server-side rendering will collect (steal) personal info; it is a no-go. The only right solution is for online services to provide a plain web site alongside the WHATWG-cartel web app, if the online service fits that model, of course. No other way, and hardcore regulation is very probably required.
I remember having tested it, but can't remember what it was like :) -- at least it didn't make me switch from Tiny Core Linux, which I've used extensively. From a superficial, distro-hopper view, DSL, Puppy, EasyOS and Tiny Core all feel quite similar, I guess.
As a side note, it is interesting to see DSL and TC on the HN front page on two consecutive days in 2025. Both are very old projects; I wonder what the impulse behind this current interest is.
It's kind of sad to hear "adult" people claim in all seriousness that it's reasonable for a kernel alone to use more memory than the minimum requirement for running Windows 95, an operating system with a kernel, drivers, a graphical user interface and even a few graphical user-space applications.
It is not down to one factor, but the size of a single bitmap of the screen is certainly an issue: a 1920x1080 framebuffer at 32 bits per pixel is already about 8MB, before any double buffering or compositing.
This is like car guys today bemoaning the loss of the simpler carburetor age, or the car guys before them bemoaning the Model T age of simplicity. It's silly.
There will never be a scenario where you need all this lightweight stuff outside of extreme edge cases, and there's SO MUCH lightweight stuff that it's not even a worry.
Also it's funny you should mention Win95, because I suspect that reflects your age, but a lot of people here are from the DOS/first Mac/Win 2.0 age, and for that crowd Win95 was the horrible resource pig and complexity nightmare. Tech press and nerd culture back then were incredibly anti-95 for 'dumbing it all down' and 'being slow', but now it's seen as the gold standard of 'proper computing.' So it's all relative.
The way I see hardware and tech is that we are forced to ride a train. It makes stops but it cannot stop for good. It will always go on to the next stop. Wanting to stay at a certain stop doesn't make sense and is in fact counter-productive. I won't go into this, but Linux on the desktop could have been a bigger contender if the Linux crowd and companies had been willing to break a lot of things and 'start over' to be more competitive with Mac or Windows, which at the time did break a lot of things and did 'start over' to a certain degree.
The various implementations of the Linux desktop always came off as clunky and tied to Unix-culture conventions which don't really fit the desktop model, which wasn't really appealing for a lot of people, and a lot of that was based on nostalgia and this sort of idealizing of old interfaces and concepts. I love KDE, but it's definitely not remotely as appealing as the Win11 or macOS GUI in ease of use.
In other words, when nostalgia isn't pushed back upon, we get worse products. I see so much unquestioned nostalgia in tech spaces; I think it's something that hurts open source projects and even many commercial ones.
however this is of course easier said than done
I think there are many.
Some examples:
* The fastest code is the code you don't run.
Smaller = faster, and we all want faster. Moore's law is over, Dennard scaling ended long ago, and smaller feature sizes are getting absurdly difficult, and therefore expensive, to fab. So if we want our computers to keep getting faster, as we've got used to over the last 40-50 years, then the only way to keep delivering that will be to start ruthlessly optimising, shrinking, and finding more efficient ways to implement what we've got used to.
Smaller systems are better for performance.
* The smaller the code, the less there is to go wrong.
Smaller doesn't just mean faster, it should mean simpler and cleaner too. Less to go wrong. Easier to debug. Wrappers and VMs and bytecodes and runtimes are bad: they make life easier but they are less efficient and make issues harder to troubleshoot. Part of the Unix philosophy is to embed the KISS principle.
So that's performance and troubleshooting. We aren't done.
* The less you run, the smaller the attack surface.
Smaller code and less code means fewer APIs, fewer interfaces, fewer points of failure. Look at djb's decades-long policy of offering rewards to people who find holes in qmail or djbdns. Look at OpenBSD. We all need better, more secure code. Smaller, simpler systems built from fewer layers mean more security, less attack surface, less to audit.
Higher performance, and easier troubleshooting, and better security. That's three reasons.
Practical examples...
The Atom editor spawned an entire class of app: Electron apps, Javascript on Node, bundled with Chromium. Slack, Discord, VSCode: there are multiple apps used by tens to hundreds of millions of people now. Look at how vast they are. Balena Etcher is a, what, nearly 100 MB download to write an image to USB? Native apps like Rufus do it in a few megabytes. Smaller ones like USBimager do it in hundreds of kilobytes. A dd command in under 100 bytes.
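For scale, the dd equivalent (image.iso and /dev/sdX are placeholders; double-check the device name, since dd will happily overwrite whatever you point it at):
sudo dd if=image.iso of=/dev/sdX bs=4M status=progress conv=fsync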
Now some of the people behind Atom wrote Zed.
It's 10% of the size and 10x the speed, in part because it's a native Rust app.
The COSMIC desktop looks like GNOME, works like GNOME Shell, but it's smaller and faster and more customisable because it's native Rust code.
GNOME Shell is Javascript running on an embedded copy of Mozilla's Javascript runtime.
Just like dotcoms wanted to dis-intermediate business, remove middlemen and distributors for faster sales, we could use disintermediation in our software. Fewer runtimes, better smarter compiled languages so we can trap more errors and have faster and safer compiled native code.
Smaller, simpler, cleaner, fewer layers, fewer abstractions: these are all good things which are desirable.
Dennis Ritchie and Ken Thompson knew this. That's why Research Unix evolved into Plan 9, which puts way more stuff through the filesystem to remove whole types of API. Everything's in a container all the time, the filesystem abstracts the network and the GUI and more. Under 10% of the syscalls of Linux, the kernel is 5MB of source, and yet it has much of Kubernetes in there.
Then they went further, replaced C too, made a simpler safer language, embedded its runtime right into the kernel, and made binaries CPU-independent, and turned the entire network-aware OS into a runtime to compete with the JVM, so it could run as a browser plugin as well as a bare-metal OS. Now we have ubiquitous virtualisation so lean into it: separate domains. If your user-facing OS only runs in a VM then it doesn't need a filesystem or hardware drivers, because it won't see hardware, only virtualised facilities, so rip all that stuff out. Your container host doesn't need to have a console or manage disks.
This is what we should be doing. This is what we need to do. Hack away at the code complexity. Don't add functionality, remove it. Simplify it. Enforce standards by putting them in the kernel and removing dozens of overlapping implementations. Make codebases that are smaller and readable by humans.
Leave the vast bloated stuff to commercial companies and proprietary software where nobody gets to read it except LLM bots anyway.
https://web.archive.org/web/20100520020401/http://therealedw...
I was going to comment that it must have been posted multiple times before 2024, but this is a refresh of the older distro, there are probably different URLs. I'm not sure what's new about it to warrant a post today, the last release is rc7 from June 2024 and the webpage is full of popup ads that are really annoying.
Perhaps someone discovered it for the first time today? If so, this used to be much smaller. 50MB vs 700MB today. I mean, it's a damn small linux that includes Firefox... that doesn't seem quite right to me.
I think the spiritual (and actual) successor to DSL is http://tinycorelinux.net . Which was also discussed here two days ago: https://news.ycombinator.com/item?id=46173547 .
The size was a 90s problem.
Well, technically the Eee is from '07. But it is 32-bit and everything that entails.
It's Firefox, Dillo, Links2 and Netsurf GTK :)
Dillo is something I'd love to daily drive like I did 20 years ago, but it would just fail on most modern websites. But it's what, 2MB in total (binary+libraries)?
Links2 is text-terminal oriented. No modern browser can do that natively at all. All the competition is even smaller (w3m, lynx). Plus links2 can run in graphics mode, even on a framebuffer, so you can run it without an X server at all.
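If memory serves, graphics mode is just a flag, and the driver name depends on how your links2 build was configured, so treat this as an example:
links2 -g http://example.com             # graphics mode under X
links2 -g -driver fb http://example.com  # directly on the framebuffer, no X server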
So Fx is the only "general purpose" browser on that list, but is just too big for old hardware.
It worked great back then, I'm sure it works even better now.
There are also some charities that ship old PCs to Africa and install a small Linux distro on them, e.g.:
When I lived in London I helped clients donate a lot of kit to ComputerAid International:
And what's now Computers4Charity: