EDIT: As usual in some Linux forums, I got a bunch of marketing replies about all the cool ideas NixOS is supposed to bring as if I haven't read about them myself in LWN and other Linux sources I routinely check on.
That's because "I think X sucks/is useless" isn't really a very useful comment in terms of the HN guidelines unless you're inviting discussion about the merits and drawbacks of the solution.
---
The other day, I upgraded my Ubuntu system and it broke my display manager for some reason; I spent several hours debugging it. Investing in a niche technology, where you won't get as much support as for e.g. Ubuntu (or even macOS/Windows), has its own set of drawbacks, so this alone isn't enough to make me switch to NixOS (problems like this really only happen every couple of years), but I do think it isn't something that should normally happen. With NixOS, it would have been easy to roll back.
At work, I maintain a bunch of custom-configured Linux images for our products. While building those images through Dockerfiles is fairly declarative, my pain starts with keeping the configuration of systems in the field up to date: half of the build instructions are duplicated in Ansible playbooks.
Conceptually, with Nix(OS), I could maintain a single declarative configuration that I would use both to create new images and to push to existing machines and re-configure them.
But I don't see getting something like that production-ready in the near future. I'm currently thinking of only building base images with Docker and doing all product-specific configuration through Ansible.
There is nothing declarative about Dockerfiles; they are a list of imperative steps. Same for Ansible playbooks, which are also often called declarative when they really aren't.
But at the abstraction level of configuring a Linux system, this imperative nature can be completely hidden, as demonstrated by NixOS. E.g. you can set `services.uptime-kuma.enable = true` in a NixOS configuration and it will fetch the package, set up a systemd service for it, set up a service user, and enable and start the service, without you having to care about any kind of ordering or the specific steps themselves. You can do the same for other services, and it doesn't matter in which order they are declared.
Of course someone has to build the abstraction first, and it is service-specific most of the time. However, NixOS provides lower-level options to define these service abstractions declaratively as well (think: declaring that a user account is present, or that a systemd service is set up with some configuration), so most of the time you still don't reach the imperative base underlying NixOS.
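As a minimal sketch of those lower-level options (the service name `mysvc` is a placeholder, and `pkgs.hello` stands in for a real daemon), a hand-written service module might look like:

```nix
{ pkgs, ... }: {
  # Declare that a dedicated system user and group exist
  users.users.mysvc = {
    isSystemUser = true;
    group = "mysvc";
  };
  users.groups.mysvc = { };

  # Declare a systemd unit; NixOS generates the unit file and
  # handles enabling and starting it on activation
  systemd.services.mysvc = {
    description = "Example service built from low-level options";
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      ExecStart = "${pkgs.hello}/bin/hello";
      User = "mysvc";
      Restart = "on-failure";
    };
  };
}
```

Note that nothing here says "create the user before starting the service"; the module system works out the ordering from the declarations.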
I don't think that is the case. In five years of using NixOS I never needed to reach for any of them, and I don't even know what would be available to me. Probably something about activation scripts? Again, I can't think of anything you might want to do with regard to system configuration that can't be done declaratively on NixOS.
I've also introduced some light NixOS usage at work (3 hosts: one is an uptime-kuma instance, two are Forgejo Actions runners). For that I had to get some proprietary scanner software to run, which I managed by putting the extracted deb package in an emulated FHS environment and setting up a service for it, all declaratively.
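A hedged sketch of that pattern (the package and paths are hypothetical; `buildFHSEnv` is the nixpkgs helper, called `buildFHSUserEnv` in older releases):

```nix
{ pkgs, ... }:
let
  # Wrap the extracted vendor files in an emulated FHS root so the
  # proprietary binary finds /lib, /usr, etc. where it expects them
  scannerEnv = pkgs.buildFHSEnv {
    name = "scanner-env";
    targetPkgs = p: [ p.glibc p.zlib ];
    runScript = "/opt/scanner/bin/scannerd";  # hypothetical path
  };
in {
  systemd.services.scanner = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig.ExecStart = "${scannerEnv}/bin/scanner-env";
  };
}
```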
Even for interfacing with legacy systems and unusual stacks I don't think you will need the escape hatch. Anything that is buildable on and above the abstraction-level of "ensure a file is present at some path with some content" should be doable declaratively, and that includes setting up an unusual software stack and running it in systemd services to communicate with some other legacy system or whatever.
The escape hatch is there to modify how NixOS itself behaves, and modifying that should only be necessary to extend NixOS' core functionality. A quick search revealed that impermanence (https://github.com/nix-community/impermanence) and in some cases sops-nix (https://github.com/Mic92/sops-nix) use it, but those fundamentally extend NixOS with ephemeral root storage support and secrets management, respectively.
99.5% of people don't use Docker because they have a passion for writing Dockerfiles, or because they find the ideas behind Docker elegant. They use it as a tool to help build other things. Meanwhile, the Nix community keeps pushing out these ideas about how Nix _works_ ("reproducible builds!", "lazy evaluation!"), but they don't seem to particularly care about making Nix easy to use for the majority of the population out there who may want to use Nix as a tool and don't care about its technical merits.
On top of that, I think most projects would benefit from having more functional programming advocates in them. Nix suffers from the opposite problem: it would benefit from having _fewer_ passionate FP people in it. Nix the language is pretty inscrutable for lots of people.
If your threshold is based on the inscrutable language, I'm not sure that can really be solved by anyone in the Nix community at this point. You're just looking for a different solution, and that's fine. But NixOS still solves real, practical issues for many people.
I'm just saying that, if the goal is mainstream adoption, the core team should acknowledge that people will use Nix as a means to an end, and not for the sake of using it as they do.
You're creating your own ideas of why people do things and presenting them as facts. Please don't, it's neither useful in the discussion nor nice.
Did it ever happen? Last I checked, flakes were still experimental.
So TL;DR -- because most programmers can't describe a use-case if their lives depended on it.
> it's so easy to modify/remove/overwrite something
For some reason, out of the corner of my eye, I thought you were describing this as a feature of Nix, because I missed the "mess up your install" part. It's easy to modify/remove/overwrite a package in Nix in a non-destructive way, though. For the few packages that have broken on me, I felt empowered to pull them down and modify them to add needed functionality. And as I've gotten more confident, I've started doing overlays, or whatever they're called, instead.
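For illustration, a minimal overlay of this kind (the package name and patch file are hypothetical) replaces a package with a patched build without touching the original definition:

```nix
# An overlay is a function from the final package set and the
# previous one to an attribute set of replacements
final: prev: {
  somepkg = prev.somepkg.overrideAttrs (old: {
    # append a local patch on top of whatever the package already has
    patches = (old.patches or [ ]) ++ [ ./fix-crash.patch ];
  });
}
```

The original package's definition stays intact; the overlay just produces a second, patched derivation, which is what makes the modification non-destructive.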
On NixOS, anything you do has to be explicit in your config, and thus it can be undone as needed.
Usually "deterministic" is used instead to describe this.
In general it's good for declarative management of systems and projects that can be deployed in a repeatable (even if not 100% reproducible) way.
> A build is reproducible if given the same source code, build environment and build instructions
That is a very strange definition. To me it is tautological: computers are deterministic, so if you have the same build environment you will get the same results by necessity. Same bits in, same bits out.
Edit: formatting.
It can be simple things (something includes timestamps in binaries, which can be worked around by pinning the time), and complex things (concurrent linking of binaries is done FIFO-style, and different compilation units finish at different times in different runs because of varying processor load).
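The timestamp case is easy to demonstrate outside any build system. gzip, for instance, embeds the input file's name and mtime in its header (this sketch assumes GNU touch and gzip):

```shell
# Two byte-identical inputs with different mtimes
echo hello > a.txt
cp a.txt b.txt
touch -d '2020-01-01 00:00:00 UTC' a.txt
touch -d '2021-01-01 00:00:00 UTC' b.txt

# Default gzip stores name and mtime in the header, so the
# outputs differ despite identical input bytes
gzip -c a.txt > a.gz
gzip -c b.txt > b.gz
cmp -s a.gz b.gz && echo same || echo different   # prints "different"

# -n drops name and timestamp, restoring bit-for-bit reproducibility
gzip -nc a.txt > a2.gz
gzip -nc b.txt > b2.gz
cmp -s a2.gz b2.gz && echo same || echo different # prints "same"
```

Tools like `SOURCE_DATE_EPOCH` generalize this idea: instead of stripping timestamps, the build pins them to a fixed value.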
assuming that "build environment" includes more things than the author does (it's *not exactly a well-defined term*)
That is exactly my point. What is a "build environment"? This definition is crucial for defining "reproducible builds", and we cannot handwave it away. There is a lot of randomness in a typical build system: randomly ordered lists, race conditions in multithreaded code, timestamps in the filesystem, etc. The worst kind of build system can even fetch files from the internet. So, no, for all practical purposes actual computers are not deterministic, and this is not tautological at all.
You can try to emulate determinism using some kind of virtual machine and run your build there (for example, see Hermit[1]), but it will never be perfect determinism and there are many practical downsides.
My point is: without a very precise definition of "same build environment", all discussion of reproducible builds is almost meaningless.
I see people working on reproducible builds who understand exactly what they are working on, and they give successful conference talks. I understand their goal too. It could be less vague in the explanation on the website, but there's a common understanding you seem to be missing, and that doesn't make the discussions or the work meaningless.
For example, listing files in a directory and adding them to a list for further processing, without explicitly sorting by some stable criterion like the filename.
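A concrete sketch of that failure mode and its fix, using `find` (whose output order follows the filesystem's readdir order rather than any stable criterion):

```shell
# Create a few files in deliberately non-alphabetical order
mkdir -p demo-src
touch demo-src/zeta.c demo-src/alpha.c demo-src/mid.c

# readdir order is filesystem- and history-dependent, so this list
# can differ between machines or even between runs
find demo-src -name '*.c' > unsorted.txt

# Sorting by filename pins a stable, machine-independent input order
find demo-src -name '*.c' | sort > sorted.txt
cat sorted.txt
```

Link order of object files is a classic instance: if `unsorted.txt` feeds the linker, two machines can produce different binaries from identical sources.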
I have some bad news.
I don't know what the designers of Nix (and NixOS) had in mind when they decided that putting layer upon layer of complexity was a great solution. If something goes wrong in NixOS at the bottom layer, the nixos-rebuild command will produce weird errors. NixOS is an effort to make Linux complex and, in my opinion, useless. I could go on listing the shortcomings of NixOS, but I'll stop here...
nixpkgs is also the largest and most up-to-date package set (followed by Arch's), so there's clearly something in the technology that allows a loosely organised group of people to scale to that level.
One of the main issues with nixpkgs is that users have to rely on overlays to modify a package. This can lead to obscure errors, because if something fails in the original package or a Nix module, it's hard to pinpoint the problem. Additionally, the heavy use of symlinks in the directory hierarchy further complicates things, giving the impression that NixOS is a patched-together and poorly designed structure.
As someone who has tried Nix, uses NixOS, and created my own modular configuration, I made optimizations and wrote some modules to scratch my own itch. I realized I was wasting time trying to make one tool configure other tools. That’s essentially what NixOS does through Nix. Why complicate a Linux system when I can just write bash scripts and automate my tasks without hassle? Sure, they might say it’s reproducible, but it really isn’t. Several packages in NixOS can fail because a developer redefined a variable; this then affects another part of the module and misconfigures the upstream package. So, you end up struggling with something that should be simple and straightforward to diagnose.
Maybe something like this? All packages are defined as a tuple (dependencies, dev dependencies, base image, command list, file list). Builds are done by loading the base image into a virtual machine, copying the dev dependencies into the vm, running the commands, and then pulling the listed files out into the host. All packages of a given OS edition would use the same base image, I suppose.
If a user wants to configure a package, this would be done by putting a patch file in their configuration directory. The patch is applied to the built files.
Some simple example I found just now:
https://unix.stackexchange.com/questions/489509/how-do-i-mod...
For config files, Guile is not that difficult.
>Tens or hundreds
You've never written a single line of Scheme or Common Lisp, and it shows.
No one writes thousands of nested lines by hand. They modularize it into functions, because Lisp was practically made for exactly that.
Get some Emacs modules and code, such as Mastodon.el, and look at the source code. It's far easier to understand than any C or C++ codebase, not to mention Rust.
I remind you this is HN; guess what it was built on.
>non-free software
Non-free software should be the issue, not Guix itself.
Software that isn't free is by default NOT reproducible. Period. Useless for science.
Changing a licence on some software doesn't change the compilation. The licence is completely orthogonal to whether software is reproducible or not.
echo foobar
Reproducible, compiled and free are all orthogonal.

Not the parent, but this approach from the Guix maintainers is the reason why it's not an OS I would consider. I use free software almost exclusively, but I appreciate that NixOS doesn't try to play the moral police and will let me easily run any non-free software I want. I don't like it, but sometimes using a non-free tool is the easiest way to get the job done, and life is too short to get stuck on software idealism (at least for me).
Oh you've tried Haskell then?
Redeploying a machine has never been easier for me as all of my config is stored in a git repo in a few .nix files. I don't need to remember what I installed or configured somewhere in /etc five years ago, I can just look in the .nix file for that machine. Everything I configure or install is synced across my desktop and laptop through imports (unless specified), which makes the experience very seamless.
When something goes wrong e.g. after an upgrade, I just reboot and choose the previous generation from the boot menu. If I just want to try out a package, I can get a temporary shell where that package is available. If I don't need it after all, it just gets garbage collected.
When working on a project, I can use a dev shell where everything I need to build/test that project is isolated so I don't pollute my root namespace.
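As a sketch of such a dev shell (the tool selection is an arbitrary example), a `shell.nix` in the project root is enough:

```nix
# shell.nix — entered with `nix-shell`; the tools are only on PATH
# inside this shell and never touch the global profile
{ pkgs ? import <nixpkgs> { } }:
pkgs.mkShell {
  packages = [
    pkgs.go
    pkgs.gopls
    pkgs.sqlite
  ];
}
```

Leaving the shell (or deleting the file) leaves no trace on the system beyond garbage-collectable store paths.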
I get a new computer once every few years and getting my usual programs installed is no more than a few hours work at most. I have 2 computers, one for work and one for private. Syncing installed programs is neither necessary nor particularly wanted; I don't want Steam on my laptop, nor do I want to run postgres on my private system. The only time an upgrade breaks stuff these days is when a kernel update needs the drivers to be recompiled. The command for that is stored in a file on my desktop and takes less than a minute to find, type in and wait until everything is fine again.
Regarding unwanted packages and "namespace pollution", every once in a while I install a program that is used once or twice and then forgotten about. This doesn't impede my normal workflow at all.
So the benefits of NixOS would be minimal for me, while the onboarding materials and documentation are pretty atrocious. The effort/reward ratio is just not there.