In the mid-1990s, hardware driver support on Linux was much broader.
Copy / paste of my comment from last year about FreeBSD
I installed Linux in fall 1994. I looked at Free/NetBSD but when I went on some of the Usenet BSD forums they basically insulted me saying that my brand new $3,500 PC wasn't good enough.
The main thing was this IDE interface that had a bug. Linux got a workaround within days or weeks.
https://en.wikipedia.org/wiki/CMD640
The BSD people told me that I should buy a SCSI card, SCSI hard drive, SCSI CD-ROM. I was a sophomore in college and I saved every penny to spend $2K on that PC and my parents paid the rest. I didn't have any money for that.
The sound card was another issue.
I remember software-based "WinModems", but Linux had drivers for some of these. Same for software-based "WinPrinters".
When I finally did graduate and had money for SCSI stuff I tried FreeBSD around 1998 and it just seemed like another Unix. I used Solaris, HP-UX, AIX, Ultrix, IRIX. FreeBSD was perfectly fine but it didn't do anything I needed that Linux didn't already do.
That's pretty much it. A lot of the people I see using a BSD these days do so because they always have and they prefer what they know, which is fine, or they just want to be contrarian.
Realistically, aside from edge cases in hardware support, you can do anything you want on any modern *nix. There's not even as much of a difference between distros as people claim. All the "I want an OS that gets out of my way" and similar reasons apply to most modern well-maintained distros these days. It's more personality and familiarity than anything objective.
Took me a while to settle on Alpine after trying Arch and Void, but I can't imagine why I would ever leave unless they change something drastic.
I broadly agree, even as a FreeBSD fan myself; things have converged a lot over the decades. But still, I generally feel that while you can get the same work done in both, FreeBSD does things better (and/or cleaner, more elegant, etc) in many cases.
The overall feeling of system cohesion makes me happier to use it, from small things like Ctrl-T producing meaningful output for all the base OS tools, to larger and more amorphous things like having greater confidence core systems won't change too quickly over time (eg: FreeBSD's relatively stable sound support, versus Linux's alsa/pulse/pipewire/..., similar for event APIs, and more).
Though I totally feel your pain about latest-and-greatest hardware driver support. Has gotten better since the '90s, but that gap will probably always be there due to the different development philosophies.
I hope FreeBSD never gets too "Linux-y"; it occupies its own nice spot in the spectrum of available options.
Big chuckle there, so good. Hey, at least they had a sense of humour.
But I agree the hardware support could be much better even to this day.
The Linux community felt like college students with no job and not much money. That included Linus Torvalds himself who developed the kernel while in college and wasn't rich. DEC basically gave Linus an Alpha to get him to port the kernel to it.
> To solve the distribution and isolation problem, Linux engineers built a set of kernel primitives (namespaces, cgroups, seccomp) and then, in a very Linux fashion, built an entire ecosystem of abstractions on top to “simplify” things: [...] Somehow we ended up with an overengineered mess of leaky abstractions
Not sure I like the value judgement here. I think it's more a consequence of Linux's success. I am convinced that if the situation were reversed (Linux niche and *BSD the norm), a ton of abstractions would have grown up around *BSD too, and the average user would "use an overengineered mess" because they don't know better (or don't care, or don't need to care).
Not that I like it when people ship their binary in a 6G docker image. But I don't think it's fair to put that on "those Linux engineers".
And containers really are a lightweight VM, so you might as well use the real thing. In fact, VMware for a long time expected their images to serve as a container-like thing, and many larger installations used them as such.
On the other hand, I don't think the comparison between jails and Docker is fair. What made Docker popular is the reusability of the containers, certainly not the sandboxing, which in the early days was very leaky.
Being able to find a service I want to run on GitHub and, 95+% of the time, have it configured, running, and fully managed about ten minutes later with usually just a one-liner shell script, simply by finding an existing Docker image: that's all of the value of Docker to me personally, and it's the thing I'd lose with jails. Jails could be a building block toward that, but last I checked there's no deep, up-to-date library of "packages" I can reach for using jails, which makes them pretty much useless to me.
1: I have like eight or nine services running on my home Debian system, they all auto-restart and come back up on reboot, and I’ve not had to touch Systemd once on that machine.
Well, what style difference exactly? GNU utils tend to be more verbose. Other than that, what is the difference in style?
It might be a matter of homogeneity: the FreeBSD tools behave in a consistent way, while there are significant differences between the Linux tools, depending on their particular authors' opinions about how the traditional UNIX tools should be changed.
For instance, at some point long ago, the traditional "ifconfig" and a few related commands were replaced in Linux by "ip" for managing networking.
The Linux "ifconfig" needed an upgrade, as it could do only a small fraction of what the FreeBSD "ifconfig" could do. Nevertheless, until today, decades later, I have been unable to stop hating the Linux "ip".
I cannot say why, because in other cases when some command-line or GUI utility that I had used for many years was replaced by an alternative I instantly recognized that the new UI was better and I never wanted to use the old UI again.
So while both FreeBSD and Linux started with the same traditional UNIX utilities, they have evolved divergently, and now they frequently feel quite different: the various options in commands or in configuration files may match your expectations only when you take the identity of the OS into account. Overall FreeBSD has been more conservative, but there are also cases where it has made bigger changes; such changes seem more carefully planned and less haphazard than in the Linux world.
First, it's important to clarify that "containers" are not an abstraction in the Linux kernel. A container is really an illusion achieved by a userspace application (podman/docker plus a container runtime) combining user/PID/network namespaces, bind mounts, and other process-isolation primitives.
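The "illusion" is easy to see with util-linux's unshare(1), which drives the same kernel primitives directly with no container runtime involved (a sketch; needs root or unprivileged user namespaces enabled):

```shell
# Fork a shell into new PID and mount namespaces with a private /proc:
# inside, the shell believes it is PID 1.
$ sudo unshare --pid --fork --mount-proc sh -c 'echo "I am PID $$"'
I am PID 1
```

Everything a container runtime adds (image layers, networking, registries) sits on top of calls like this.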
OCI container tooling is much easier to use, and follows the "cattle not pets" philosophy, and when you're deploying on multiple systems, and want easy updates, reproducibility, and mature tooling, you use OCI containers, not LXC or freebsd jails. FreeBSD jails can't hold a candle to the ease of use and developer experience OCI tooling offers.
> To solve the distribution and isolation problem, Linux engineers built a set of kernel primitives (namespaces, cgroups, seccomp) and then, in a very Linux fashion, built an entire ecosystem of abstractions on top to “simplify” things.
This was an intentional design decision, and not a bad one! cgroups, namespaces, and seccomp are used extensively outside of the container abstraction (see flatpak, systemd resource slices, firejail). By not tying process isolation to the container abstraction, we let non-container applications benefit from it too. We also get a wide breadth of container-runtime choices.
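For instance, the same cgroup controls can be applied to an ordinary process with systemd-run, no container in sight (a sketch; `my_batch_job` is a stand-in for any command):

```shell
# Run a command in a transient scope unit with a memory cap and reduced CPU weight:
$ systemd-run --scope -p MemoryMax=512M -p CPUWeight=50 -- my_batch_job
```

`MemoryMax` and `CPUWeight` are the same resource-control properties a container runtime would set on your behalf.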
I still see FreeBSD as being great for things like networking devices and storage controllers. You can apply a lot of the "cattle vs pets" design one level above that using VMs and orchestration tools.
Spawning a Linux container is much simpler and faster than spawning a FreeBSD jail.
I don't know why I keep hearing about jails being better; they clearly aren't.
> I don’t know why i keep hearing about jails being better
Jails have a significantly better track record in terms of security.
I can delegate a ZFS dataset to a jail to let the jail manage it.
Do Linux containers have an equivalent to VNET jails yet? With VNET jails I can give the jail its own whole networking stack, so they can run their own firewall and dhcp their own address and everything.
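For anyone who hasn't seen one, a VNET jail sketch in /etc/jail.conf; the jail name, path, and epair interface here are illustrative, not canonical:

```shell
demo {
    path = "/usr/local/jails/demo";
    host.hostname = "demo.example.org";
    vnet;                          # give the jail its own copy of the network stack
    vnet.interface = "epair0b";    # jail end of an epair(4) virtual link
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}
```

With `vnet` set, the jail gets its own interfaces, routing table, and firewall state, which is what allows it to run its own firewall and DHCP its own address.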
https://freebsdfoundation.org/blog/oci-containers-on-freebsd/
https://www.freshports.org/sysutils/podman-compose/

FreeBSD jails were technically solid years before Docker existed, but the onboarding story was rough. You needed to understand the FreeBSD base system first. Docker let you skip all of that.
That said, I've been seeing more people question the container stack complexity recently. Especially for smaller deployments where a jail or even a plain VM with good config management would be simpler and more debuggable. The pendulum might be swinging back a bit for certain use cases.
But it's not a competition. FreeBSD does its thing and Linux does another. That's why I use FreeBSD.
- Stable OS coupled with rolling packages. I am on the previous FreeBSD version (14.3-RELEASE, while 15 is out) but I have the very latest KDE.
- A ports collection where you can recompile packages whenever you're not happy with the default settings. Strict separation between packages and core OS. Everything that is from packages is in /usr/local (and this separation is also what makes the above point possible).
- ZFS on root as first-class citizen. Really great. It has some really nice side tooling too like sanoid/syncoid and bectl (the latter is part of the core OS even).
- jails for isolation (I don't really use it like docker for portability and trying things out)
- Clear documentation because there are no different distros. Very good handbook. I like the rc.conf idea.
- Common sense mentality, not constantly trying to reinvent the wheel. I don't have to learn a new init system and I can still use ifconfig. Things just work without constant fiddling.
- Not much corporate messing around with the OS. Most of the linux patches come from big tech now and are often related to their goals (e.g. cloud compatibility). I don't care about those things and I prefer something developed by and for users, not corporate suits. No corporates trying to push their IP onto the users (e.g. canonical with their Mir, snaps etc)
- Not the same thing as everyone else has. I'm not a team player, I hate going with the flow. I accept that sometimes comes with stuff to figure out or work around.
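The bectl point deserves a concrete illustration, since boot environments make OS updates roughly transactional (a sketch; the environment name is arbitrary):

```shell
$ bectl create pre-upgrade       # snapshot the current root as a boot environment
# ...perform a risky upgrade...
$ bectl list                     # list environments; shows which is active / next boot
$ bectl activate pre-upgrade     # roll back: boot the old environment at next reboot
```

Because boot environments are ZFS clones, creating one is nearly instant and costs almost no space until the two diverge.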
I think that's about it.
I don't use it full time, only in a VM, but all these things sound positive to me.
I had to do 'bonded' interfaces on Debian the other day. It's what, 5 different config files depending on which 'network manager' you use. In FreeBSD it's 5 lines in /etc/rc.conf and you're done.
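Roughly what those rc.conf lines look like for an LACP bond over two NICs (interface names em0/em1 are examples):

```shell
cloned_interfaces="lagg0"
ifconfig_em0="up"
ifconfig_em1="up"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 DHCP"
```

One file, one format, regardless of which tools are installed.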
And don't even get me started on betting which distribution (ahem CentOS) will go away next.
Ubuntu is the disaster Linux distro, I won’t touch Ubuntu if I have any other option.
In LTS environments where I need to upgrade OS's, FreeBSD is a no-brainer.
I laughed out loud, there is no in-place upgrade mechanism for that in those distros and never has been, that is the nature of those distros. They release patch/security updates until they go EOL, which is measured in units closer to decades than years.
I don’t have a problem with BSDs. That’s cool you like upgrading in place.
The best and most laugh-inducing part of your whole point is that CentOS now not only allows you to do in-place upgrades; that's the whole fucking point.
I'm using either Docker Compose or Docker Swarm without Kubernetes, and there's not that much of it, to be honest. My "ingress" is just an Apache2 container that's bound to 80/443 and my storage is either volumes or bind mounts, with no need for more complexity there.
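A minimal sketch of that layout as plain docker commands (the image, volume, and app names are examples; in practice it would live in a compose file):

```shell
$ docker network create web
$ docker run -d --name ingress --network web -p 80:80 -p 443:443 httpd:2.4
$ docker run -d --name app --network web -v appdata:/srv/data my-app:latest
```

The "ingress" is just the official Apache image holding ports 80/443; everything else talks to it over the shared bridge network.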
> The jails vs containers framing is interesting but I think it misses why Docker actually won. It wasn't the isolation tech. It was the ecosystem: Dockerfiles as executable documentation, a public registry, and compose for local dev. You could pull an image and have something running in 30 seconds without understanding anything about cgroups or namespaces.
So where's Jailsfiles? Where's Jail Hub (maybe naming needs a bit of work)? Where's Jail Desktop or Jail Compose or Jail Swarm or Jailbernetes?
It feels like either the people behind the various BSDs don't care much for what allowed Docker to win, or they're unable to compete with it, which is a shame, because it'd probably be somewhere between a single and double digit percent userbase growth if they decided to do it and got it right. They already have some of the foundational tech, so why not the UX and the rest of it?
Even if "jailsfiles" were created the ecosystem would need to start from scratch and sometimes it feels like people in the FreeBSD ecosystem have a hard enough time keeping ports somewhat up to date, let alone create something new.
Luckily Podman seems to support FreeBSD these days for Docker images, but the Linux emulation might be a bit of a blocker, so it's not a 100% solution.
On the outside. But that's a lot of complexity hidden from view there, easily a couple of million lines of code on top of the code that you wrote.
I did some looking around, and I see that ocijail is a thing, so that's probably what I was looking for.
(edited, sorry, I didn't see your reply)
The link literally uses the term ecosystem. Several times actually.
Many Linux syscalls are unemulated, and things like /proc/<pid>/fd/NN are not "magic symlinks" like on Linux, so execve on them fails (i.e. there is rudimentary /proc support, but it's not fully fleshed out).
TL;DR Linux containers on FreeBSD via the podman + linuxulator feel half baked.
For example, try using the alpine container... `apk upgrade` will fail due to the /proc issue discussed above. Try using the Fedora container `dnf upgrade` will fail due to some seccomp issue.
The future of containers on FreeBSD is FreeBSD OCI containers, not (emulated) Linux containers. As an aside, podman on FreeBSD requires sudo which kinda defeats the concept but hopefully this will be fixed in the future.
Fixed that for you ;)
But somehow Linux still took over my personal and professional life.
Going back seems nice, but there needs to be a compelling reason. Docker is fine and the costs don't add up any more. I don't have a real logical argument beyond that.
However, in 2003 Intel introduced CPUs with SMT and in 2005 AMD introduced multi-core CPUs.
These multi-threaded and/or multi-core CPUs quickly replaced the single-threaded CPUs, especially in servers, where the FreeBSD stronghold was.
FreeBSD 4 could not handle multiple threads. In the following years, Linux and Windows were developed immediately to take advantage of multiple threads and cores, while FreeBSD required many years for this, a time during which it became much less used than before: new users were choosing Linux, and some of the old users were also switching to Linux for new computers that FreeBSD did not support.
Eventually FreeBSD became decent again from the PoV of performance, but it has never regained a top position, and it lacks native device drivers for much of the hardware supported by Linux, because it has far fewer developers able to do the necessary reverse-engineering work, or the porting work for the cases where a company provides Linux device drivers for its hardware.
For the last 3 decades, I have been using continuously both FreeBSD and Linux. I use Linux on my desktop PCs and laptops, and in some computational servers where I need software support not available for FreeBSD, e.g. NVIDIA CUDA (NVIDIA provides FreeBSD device drivers for graphic applications, but not CUDA). I continue to use FreeBSD for many servers that implement various kinds of networking or storage functions, due to exceptional reliability and simplicity of management.
The real difference in the early '00s was that momentum brought two things that made FreeBSD a worse choice (and made even more people end up using Linux):
1: "commercial" support for Linux, firstly hardware like you mentioned, but in the way that you could buy a server with some Linux variant installed and you knew that it'd run, unless you're an CTO you're probably not risking even trying out FreeBSD on a fresh machine if time isn't abundant.
Software like Java servers also comes to mind: it came with binaries or was otherwise easy to get running on Linux, and even with FreeBSD's Linux compatibility layer, runtimes like the JVM and CLR often relied on subtle details that made them incompatible with it (I tried running .NET a year or two ago and ran into random crashes).
2: a lot of "fresh" Linux developers had a heavy "works on my machine" mentality, relying on Linux semantics, paths, or libraries in makefiles (or on dependencies like systemd).
Sure, there are often upstream patches (eventually), or patches in FreeBSD ports. The latter are good for stable systems but a PITA in the long run, since stuff doesn't get upstreamed properly and you're often stuck at a major release, needing to figure out how to patch the new version yourself.
I'm sure some people have a sunk-cost feeling with Linux and will get defensive about this, but ironically this was exactly the argument I heard 20 years ago, and I was defensive about it myself then. This has only become more true, though.
It's really hard to argue against Linux when even architecturally poor decisions are papered over by sheer force of will and investment; so in a day-to-day context Linux is often the happy path even though the UX of FreeBSD is more consistent over time.
Which surely says something about all these ideological purity tests
People who don't understand shit about how the system behaves and are comfortable with that. "I install a package, I hit the button, it works"
.. and
People who understand very deeply how computers work, and genuinely enjoy features of the NT Kernel, like IOCP and the performance counters they offer to userland.
What's weird to me is that the competence is bimodal; you're either in the first camp or the second. With Linux (plus BSD, Solaris, etc.) it's a lot more of a spectrum.
I've never understood exactly why this is, but it's consistent. There's no "middle-good" Windows developer.
The machine and installation is just fungible.
I think I've had Linux as a primary OS twice, FreeBSD once, and OS X once; what's pulled me back has been software and fiddling.
I'm on the verge of giving Linux or OS X another shot, though; some friends have claimed that fiddling is virtually gone on Linux these days, and Wine also seems more than capable now of handling the software that brought me back.
But also, much of the software is available outside of Windows today.
Gamers tend to be somewhere in the middle though.
With 9front you of course need expertise on par with NT, but with far less effort. The books (9intro), the papers, CSP for concurrency... it's all there; there's no magic. You don't need OllyDbg or an NT object explorer to understand OLE and COM, for instance.
RE 9front? Maybe on issues while debugging, because the rest is at /sys/src, and if something happens you just point Acid under Acme straight at the offending source line. The man pages cover everything. Drivers are 200x smaller and more understandable than on both NT and Unix. Meanwhile, to do that under NT you must almost be able to design an ISA yourself, plus some trivial compiler/interpreter/OS for it, because there's no open code for anything. And no, Wine is not a reference, but a reimplementation.
OpenVZ and Linux vserver are older than LXC and were commonly used, though they required a patched kernel.
But Virtuozzo was hampered by being non-free - OpenVZ wasn't open sourced until 2-3 years later, by which time the damage had been done (but, of course, Xen headed in the opposite direction at roughly the same time!)
And Linux-VServer was held back by being focussed so directly at virtual hosting providers - it positioned itself against fcgi-suexec, fcgid, and php-fpm (and was much more unwieldy than any of them) rather than jails or VZ.
Both were more or less ignored until the late 2000s, by which time LXC had taken a lot of mindshare - allowing the "FreeBSD was years ahead with jails" meme to take root.
For all my computing career, I've used Unix-alikes because they let me develop software by having an idea, writing some code, letting it run in the terminal, chaining things together, seeing where it failed, and iterating. Terminals let this happen much faster than clicky GUI software, and they contain a log of what was happening in a terminal pane (or GNU screen or tmux pane, because usually this is happening on big servers and compute clusters rather than my terminal/laptop).
I could launch a bunch of panes, have several lines of investigation going, and come back to it a day or week later when I had time because the terminal kept a log of everything that happened.
And a couple years ago I started noticing that things I thought I had launched would start disappearing. At first I thought I had started accidentally mispressing a key and killing a tmux pane rather than disconnecting.
But no! When I finally went back to an in-person Linux workstation and saw it happen to the entire terminal window, I knew something major had changed. It turns out that something called systemd-oomd was added to a bunch of distributions, and it kills entire cgroups of processes at a time rather than a single offending process.
So now if you want to run processes and isolate the kill zone of a process, you have to wrap every freaking subprocess in an entire systemd-run wrapper or docker wrapper. And systemd-run won't work from many contexts, such as inside a Jupyter kernel.
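Concretely, the wrapper dance looks something like this (`long_job.sh` is a stand-in; systemd-run also needs a working user session, which is exactly what's missing inside e.g. a Jupyter kernel):

```shell
# Each job gets its own transient scope, i.e. its own cgroup, so a
# systemd-oomd kill takes out only that job instead of the whole tmux session:
$ systemd-run --user --scope -- ./long_job.sh
```

The extra line per launch is trivial; the problem is that every launcher, notebook, and script now has to know about it.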
Major breaking changes on fundamental system behavior are a huge problem these days. It's one thing to let the OS kill processes more when there's a memory issue, fine, great, go ahead. But why kill all the lightweight processes that could give feedback to a user?! And why force non-portable process launching semantics, that aren't even consistent across the entire system?!? So infuriating.