Posted by LorenDB 12/6/2025
Phenomenal for those low-powered servers you just want to leave on, running some tiny batch of cron jobs [1] or something, for months or years at a time, without worrying too much about wear on the SD card rendering the whole installation moot.
This is actually how I have powered the backend data collection and processing for [2], as I wrote about in [3]. The end result is a static site built with Hugo, but I was careful to pick parts I could safely leave to tick along on their own for a long time.
[1]: https://til.andrew-quinn.me/posts/consider-the-cronslave/
[2]: https://hiandrewquinn.github.io/selkouutiset-archive/
[3]: https://til.andrew-quinn.me/posts/lessons-learned-from-2-yea...
Before the RPi existed, I always made filesystem images for USB sticks in NetBSD so that writes never touched "disk" ("diskless"). This allowed me to remove the USB stick after boot, freeing up the slot for something else
BSD "install images" work this way
I have been using the RPi with a diskless NetBSD image since around 2012; there are no SD card writes, the userland is extracted into RAM
I can pull out the SD card after boot and use the slot for something else
If I want data storage, I connect an external drive
It's been wild to read endless online complaints from so-called "technical" RPi users for the last 13 years about SD card wear and tear
To me, it's another example of how a solution as old as the hills can be completely ignored in favor of a "modern" approach that is fatally flawed
A lot of the SD-card wear issues come from people running “normal PC workflows” on a storage medium that was never designed for that pattern.
Something I’ve seen help many newcomers is simply enabling an overlay filesystem or tmpfs-based writes. It’s basically the middle ground between a full RAM-boot distro (piCore, Alpine diskless, NetBSD) and a standard SD-based Raspberry Pi OS.
You still get the normal ecosystem and docs, but almost no writes hit the card unless you explicitly commit them.
For anyone stuck between “I want something simple” and “I don’t want my SD to die,” overlays are the easiest win.
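If you want to sanity-check that the overlay is actually keeping writes off the card, a rough sketch like this will show how many sectors hit it over a minute (it assumes the card shows up as mmcblk0, the usual device name on a Pi; adjust if yours differs):

    #!/usr/bin/env python3
    """Rough check that an overlay/tmpfs setup is keeping writes off the SD card.

    Samples /proc/diskstats twice and reports how many sectors were written
    to the device in between. DEVICE is an assumption (mmcblk0 on most Pis).
    """
    import time

    DEVICE = "mmcblk0"          # whole-device entry, not a partition
    INTERVAL = 60               # seconds between samples

    def sectors_written(device: str) -> int:
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    return int(fields[9])   # field 10: sectors written
        raise RuntimeError(f"{device} not found in /proc/diskstats")

    before = sectors_written(DEVICE)
    time.sleep(INTERVAL)
    after = sectors_written(DEVICE)
    print(f"{after - before} sectors written to {DEVICE} in {INTERVAL}s "
          f"({(after - before) * 512} bytes)")

With the overlay enabled you should see the number stay at or near zero during normal operation.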
NetBSD and Tiny Core Linux, even with all their benefits, are a harder experience to get into if you haven't already dipped your toes into Linux, and they don't have the same wide community and boundless online resources.
The point I'm making is that putting the rootfs on a memory filesystem, e.g., tmpfs, mfs, etc. avoids the problem with SD cards^1
This can be done with a variety of operating systems. IMO, the advantage of the RPi hardware is that it is supported by so many different operating systems
When I want to run additional, larger programs that are not in the rootfs I have embedded into the kernel, I either (a) run them from external storage or (b) copy them to the mfs/tmpfs
It depends on how much RAM I have available
1. There are probably other ways to avoid the problem, too
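Roughly speaking, that decision looks like this (a toy sketch only; the headroom figure is made up, and the program path is whatever you pass on the command line):

    #!/usr/bin/env python3
    """Toy illustration of the 'copy into tmpfs only if RAM allows' decision."""
    import os
    import sys

    def mem_available_bytes() -> int:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) * 1024   # value is reported in kB
        raise RuntimeError("MemAvailable not found")

    def should_copy_to_tmpfs(path: str, headroom: int = 256 * 1024 * 1024) -> bool:
        # Copy the program into tmpfs only if doing so still leaves `headroom` free
        return os.path.getsize(path) + headroom < mem_available_bytes()

    if __name__ == "__main__":
        prog = sys.argv[1]          # path to the program on the external drive
        where = "tmpfs" if should_copy_to_tmpfs(prog) else "external storage"
        print(f"run {prog} from {where}")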
But NetBSD ISOs are much heavier than TCL ISOs, so while I'm sure there's a way to get just what I want working in diskless mode, I'm not confident I would have any RAM left to run what I actually want to run on top of it.
https://www.digitalreviews.net/reviews/pc/norhtec-xcore-geck...
I've noticed Puppy is still around but I have no idea whether it can still be comparable to Tiny Core.
As compared to TC, the "out of the box" NetBSD images contain many things I wouldn't need, so customizing it has been a recurring thought, but oh well. The documentation and careful modularity is, obviously, a huge bonus of NetBSD in that regard (even an end-user like me could do some interesting modifications of the kernel solely by reading the manual). TC seems much more ad-hoc, but I assume this, too, is intentional, by design.
Around that time the NetBSD kernels with embedded rootfs filesystem I was making were around 17MB
Today, TCL is 23MB
The NetBSD kernels with embedded rootfs I'm using today are around 33MB
That size can be reduced of course
I don't monitor the boot process on the RPi with a serial console; I only connect after tinysshd is running, so I don't pay close attention to boot speed. It's fast enough
TCL appears to be aimed at users who prefer a binary distribution; it also provides a GUI by default
I prefer to compile from source and I only use textmode hence NetBSD is more suitable for me than TCL
For someone who does not want to compile anything from source, it is possible to "customise" (replace) the rootfs of a NetBSD install image with another rootfs. It is not documented anywhere that I'm aware of but I have done it many times
I use a very minimal userland. I guarantee few if any HN readers would be satisfied with it. If I need additional programs I either (a) mount an external drive and run the programs from external storage, e.g., via chroot, or (b) copy them from an external drive into mfs or tmpfs
It depends on how much RAM I have
Though I don't explicitly load the entire userspace into RAM, since this is a laptop and I don't foresee a need to remove the SSD after boot.
So, running it on a Pi 5 CM in an IO board, there's no way to tell the Pi what device to boot from.
Yes, this is exactly what I want, except I need some simple Node servers running, which is not so ultra-light. Would you happen to know if this all still works in RAM out of the box, or does it require extra work?
You can run Node.js fine on a Pi with "Raspberry Pi OS Lite". In the configs, look for "Overlay File System" and enable it on the boot partition and the main partition. The Pi will boot from the SD card and run entirely in RAM.
Be sure to run something to clear your logs occasionally, or reboot once in a while, or you'll run out of RAM. Still, get a quality SD card and power supply. You can get years out of a setup like this.
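If you want a starting point for the log cleanup, here's a rough sketch meant to run from cron, say daily (the directory and size limit are arbitrary examples; on a systemd image, journalctl --vacuum-size does a similar job for the journal):

    #!/usr/bin/env python3
    """Trim oversized logs so a RAM-backed overlay doesn't slowly fill up."""
    import os

    LOG_DIR = "/var/log"
    MAX_BYTES = 5 * 1024 * 1024   # truncate anything over 5 MB

    for root, _dirs, files in os.walk(LOG_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                if os.path.getsize(path) > MAX_BYTES:
                    # Truncate in place instead of deleting, so open file handles
                    # held by running daemons keep working.
                    with open(path, "r+b") as f:
                        f.truncate(0)
                    print(f"truncated {path}")
            except OSError:
                pass   # file vanished or permission denied; skip it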
I also like SliTaz: http://slitaz.org/en, and Slax too: https://www.slax.org/
Oh, and Puppy Linux, which I could never get into but was good for live CDs: https://puppylinux-woof-ce.github.io/
And there's also Alpine too.
The most responsive one, unexpectedly, was Raspberry Pi OS.
I carefully put a fairly minimal Xfce setup on it instead of LXDE and RAM usage doubled. It's impressively hand crafted and pruned.
Sadly, though, it hasn't been updated since Debian 11.
It will increase the size of the VM, but the template would still be smaller than a full-blown OS
Aside from dev containers, what are the other options? Running IntelliJ locally on my laptop is not an option.
I SSH into my computer and work in Nvim, which is fine, but I really miss the full power of IntelliJ
In my experience, by the time you’re compiling and running code and installing dev dependencies on the remote machine, the size of the base OS isn’t a concern. I gained nothing from using smaller distros but lost a lot of time dealing with little issues and incompatibilities.
This won’t win me any hacker points, but now if I need a remote graphical Linux VM I go straight for the latest Ubuntu and call it day. Then I can get to work on my code and not chasing my tail with all of the little quirks that appear from using less popular distros.
The small distros have their place for specific use cases, especially automation, testing, or other things that need to scale. For one-offs where you’re already going to be installing a lot of other things and doing resource intensive work, it’s a safer bet to go with a popular full-size distro so you can focus on what matters.
I'm all for suggestions for a better base OS in small Docker containers, mostly to run nginx, php, postgres, mysql, redis, and python.
> Alpine uses musl instead of glibc for the C standard library. This has caused me all types of trouble in unexpected places.
I have no experience with alternative C libs. Can you share some example issues?
https://purplecarrot.co.uk/post/2021-09-04-does_alpine-resol...
No precompiled Linux stuff runs. No Chrome, no 3rd party Electron apps work unless specifically ported. For me, no Slack, no Panwriter, no Ferdium.
Flatpak works, sort of, with restrictions. Snap doesn't.
Question: I use VirtualBox, but I feel it's kinda laggy sometimes. What do you use? Any suggestions on performance improvements?
Never really got what it’s for.
It'd be best with a hardwired network though.
Thank you for this reminder! I had completely forgotten about SliTaz; looks like I need to check it out again!
In what way? Do you mean you didn't get the chance to use it much, or something about it you couldn't abide?
I used both the FLTK desktop (including my all-time favorite web browser, Dillo, which was fine for most sites up to about 2018 or so) and the text-only mode. TC repos are not bad at all, but building your own TC/squashfs packages will probably become second nature over time.
I can also confirm that a handful of lengthy, long-form radio programs (a somewhat "landmark" show) for my Tiny Country's public broadcasting are produced -- and, in some cases, even recorded -- on either a Dell Mini 9 or a Thinkpad T42 and Tiny Core Linux, using the (now obsolete?) Non DAW or Reaper via Wine. It was always fun to think about this: here I am, producing/recording audio for Public Broadcasting on a 13+ year old T42 or a 10 year old Dell Mini netbook bought for 20€ and 5€ (!) respectively, whereas other folks accomplish the exact same thing with a 2000€ MacBook Pro.
It's a nice distro for weirdos and fringe "because I can" people, I guess. Well thought out. Not very far from "a Linux that fits inside a single person's head". Full respect to the devs for their quiet consistency - no "revolutionary" updates or paradigm shifts, just keeping the system working, year after year. (FLTK in 2025? Why not? It does have its charm!) This looks to be quite similar to the maintenance philosophy of the BSDs. And, next to TC, even NetBSD feels "bloated" :) -- even though it would obviously be nice to have BSD Handbook level documentation for TC; then again, the scope/goal of the two projects is maybe too different, so no big deal. The Corebook [1] is still a good overview of the system -- no idea how up-to-date it is, though.
All in all, an interesting distro that may "grow on you".
Booting a dedicated, tiny OS with no distractions helped me focus. Plus since the home directory was a FAT32 partition, I could access all my files on any machine without having to boot. A feature I used a lot when printing assignments at the library.
Before encryption by default, getting files off Windows machines for family when they messed up their computers. Or changing their passwords.
Before browser profiles and containers, I used them in VMs for different things like banking, shopping, etc.
Down to your imagination really.
Not to mention just playing around with them too.
Or 128K of RAM and 400 KB of disk for that matter.
The "high color" (16 bit) mode was 5:6:5 bits per channel, so 16 bits per pixel.
> So 153,600 bytes for the frame buffer.
And so you're looking at 614.4 KB (600 KiB) instead.
To be frank, I wasn't aware such a mode was a thing, but it makes sense.
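For reference, the arithmetic (a quick sketch; I'm assuming the quoted 153,600 bytes was 640x480 at 4 bpp, since 320x240 at 16 bpp happens to give the same number):

    # Framebuffer size = width * height * bits-per-pixel / 8
    def framebuffer_bytes(width: int, height: int, bpp: int) -> int:
        return width * height * bpp // 8

    print(framebuffer_bytes(640, 480, 4))    # 153600 bytes  (16-colour mode)
    print(framebuffer_bytes(640, 480, 16))   # 614400 bytes = 600 KiB (5:6:5 high colour)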
In 1985, and with 512K of RAM. It was very usable for work.
Games used either 320h or 640h resolutions, 4 bit or fake 5 bit known as HalfBrite, because it was basically 4 bit with the other 16 colors being the same but at half brightness. The fabled 12-bit HAM mode was also used, even in some games, even for interactive content, but not too often.
Btw I was a teenager when those Denthor trainers came out and I read them all, I loved them! They taught me a lot!
For example, NVIDIA GPU drivers are typically around 800M-1.5G.
That math actually goes wildly in the opposite direction for an optimization argument.
They also pack in a lot of game-specific optimizations for whatever reason. Could likely be a lot smaller without those.
The EGA (1984) and VGA (1987) could conceivably be considered GPUs, although not Turing complete. The EGA had 64, 128, 192, or 256K of video memory and the VGA 256K.
The 8514/A (1987) was Turing complete although it had 512kB. The Image Adapter/A (1989) was far more powerful, pretty much the first modern GPU as we know them and came with 1MB expandable to 3MB.
The PGC was kind of a GPU if you squint a bit. It didn't work the way a modern GPU does where you've got masses of individual compute cores working on the same problem, but it did have a processor roughly as fast as the host processor that you could offload simple drawing tasks to. It couldn't do 3D stuff like what we'd call a GPU today does, but it could do things like solid fills and lines.
In today's money the PGC cost about the same as an RTX PRO 6000, so no-one really had them.
WTF? Tell me more!
I have one, but I have no matching screen so I never tried it... Maybe it's worth finding a converter.
That said, OSs came with a lot less stuff then.
True. And it's still around. It's FOSS now, runs natively on a Raspberry Pi 1-400 and Zero, and has Wi-Fi, IPv6, and a WebKit browser.
Sure we could go back... Maybe we should. But there's a lot of stuff we take for granted today that was not available back then.
It's hinted at in this tutorial, but you'd have to go through the Programmer's Reference Manual for the full details: https://www.stevefryatt.org.uk/risc-os/wimp-prog/window-theo...
RISC OS 3.5 (1994) was still 2MB in size, supplied on ROM.
P.S. I should probably mention that there wasn't room in the ROM for the vector fonts; these needed to be loaded from some other medium.
No SSL, probably so you can access that site in the browser
Windows 3.1 was only something like 16MB of storage.
Imagine the Cray supercomputer in those days being used to run a toaster or doorbell…
I prefer to use additional RAM and disk for data not code
Probably not due to DMA buffers. Maybe a headless machine.
But would be funny to see.
If you were someone special, you got 1024x768.
Or 32K of RAM and 64KB disk for that matter.
What's your point? That the industry and what's commonly available gets bigger?
It's 20 years later and I've been running Linux for most of that time, so I probably would have even more fun revisiting DSL and Tiny Core Linux.
They did.
https://www.theregister.com/2024/02/14/damn_small_linux_retu...
I don't think that had the X Window System. https://web.archive.org/web/19991128112050/http://www.qnx.co... and https://marc.info/?l=freebsd-chat&m=103030933111004 confirm that. It ran the Photon microGUI windowing system (https://www.qnx.com/developers/docs/6.5.0SP1.update/com.qnx....)
Some businesses stick with markets they know, as non-retail customer revenue is less volatile. If you enter the consumer markets, there are always 30k irrational competitors (likely with 1000X the capital) that will go bankrupt trying to undercut the market.
It is a decision all CEOs must make eventually. Best of luck =3
"The Rules for Rulers: How All Leaders Stay in Power"
Stuff that is better designed and implemented usually costs money and comes with more restrictive licenses. It’s written by serious professionals later in their careers working full time on the project, and these are people who need to earn a living. Their employers also have to win them in a competitive market for talent. So the result is not and cannot be free (as in beer).
But free stuff spreads faster. It’s low friction. People adopt it because of license concerns, cost, avoiding lock in, etc., and so it wins long term.
Yes I’m kinda dissing the whole free Unix thing here. Unix is actually a minimal lowest common denominator OS with a lot of serious warts that we barely even see anymore because it’s so ubiquitous. We’ve stopped even imagining anything else. There were whole directions in systems research that were abandoned, though aspects live on usually in languages and runtimes like Java, Go, WASM, and the CLR.
Also note that the inverse is not true. I’m not saying that paid is always better. What I’m saying is that the worse stuff is free and the better stuff was usually paid, but some crap was also paid. Very little of the better stuff was free.
Conversely, I remember Maya or Autodesk used to have a bounty program for whoever would turn in people using unlicensed/cracked versions of their product. Meanwhile Blender (itself from a commercial past) kept its free nature and has consistently grown in popularity and quality without any such overtures.
Of course nowadays with SaaS everything gets segmented into weird verticals and revenue upsells are across the board, with the first hit usually also being free.
They turned into legal-service-firms along the way, and stopped real software development/risk at some point in 2004.
These firms have been selling the same product for decades. Yet once they get their hooks into a business, few survive the incurred variable costs of the 3000lb mosquito. =3
In *nix, most users had a rational self-interest to improve the platform. "All software is terrible, but some of it is useful." =3
They were expensive too. You had to pay for each device driver you used.
« QNX DEMO disk
Extending possibilities and adding undocumented features »
I don't know if there are any other options for older machines other than stripped down Linux distros.
Its documentation is a free book: http://www.tinycorelinux.net/book.html
[1] https://wiki.tinycorelinux.net/doku.php?id=dcore:welcome
Download from at least one more location (like some AWS/GCP instance) and checksum.
Download from the Internet Archive and checksum:
https://web.archive.org/web/20250000000000*/http://www.tinyc...
EDIT: nevermind, I see that it has the md5 in a text file here: http://www.tinycorelinux.net/16.x/x86/release/
https://distro.ibiblio.org/tinycorelinux/downloads.html
And all the files are here
https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/
Over an HTTPS connection. I am not at a terminal to check the cert with OpenSSL.
I don’t see any way to check the hash OOB
Also this same thing came up a few years ago
https://www.linuxquestions.org/questions/linux-newbie-8/reli...
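For what it's worth, cross-checking the two sources is easy to script. A sketch only: the filename below is a placeholder (check the release directory listing for the actual ISO and .md5.txt names), the two URLs are the ones mentioned in this thread, and matching hashes from two fetched sources still doesn't prove the upstream itself wasn't compromised:

    #!/usr/bin/env python3
    """Download an image from the HTTPS mirror and compare its MD5 against the
    hash text file published on the project site."""
    import hashlib
    import urllib.request

    MIRROR = "https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/"
    SITE = "http://www.tinycorelinux.net/16.x/x86/release/"
    NAME = "TinyCore-current.iso"          # placeholder filename

    def fetch(url: str) -> bytes:
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    iso = fetch(MIRROR + NAME)
    digest = hashlib.md5(iso).hexdigest()

    # The .md5.txt file contains "<hash>  <filename>"
    published = fetch(SITE + NAME + ".md5.txt").decode().split()[0]

    print("computed :", digest)
    print("published:", published)
    print("MATCH" if digest == published else "MISMATCH -- do not use this image")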
> this same thing came up a few years ago
Honestly, that makes this inexcusable. There are numerous SSL providers available for free, and if that’s antithetical to them, they can use a self signed certificate and provide an alternative method of verification (e.g. via mailing list). The fact they don’t take this seriously means there is 0 chance I would install it!
Honestly, this is a great use for a blockchain…
Are any distros using blockchain for this?
I am used to using code signing with HSMs
> are any distros using blockchain
I don’t think so, but it’s always struck me as a good idea - it’s actual decentralised verification of a value that can be confirmed by multiple people independently, without trusting anything beyond the signing key being secure.
> I am used to code signing with HSMs
Me too, but that requires distributing the public key securely which… is exactly where we started this!
> for extra high security,
No, sending the hash on a mailing list and delivering downloads over https is the _bare minimum_ of security in this day and age.
And all the files are here https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/
I posted that above in this thread.
I will add that most places, forums, and sites don’t deliver the hash OOB. Unless you mean like GPG, but that would have come from the same site. For example, if you download a Packer plugin from GitHub, the files and hash all come from the same site.
This thread started by talking about the site serving the download (and hash) over http. Github serves their content over https, so you're not going to be MITM'ed. There are other attack vectors, but if the delivery of the content you're downloading is compromised/MITM'ed, you've lost.