I think it's fair to say that OS/2 had better Windows compatibility (for its era) than Wine offers (in this era). The problem was that Microsoft introduced breaking changes with Windows 95. While old Windows applications would continue to run under OS/2, IBM felt that it would take too much effort to introduce a compatibility layer for Windows 95. If I recall correctly, it involved limitations in how OS/2 handled memory.
Besides, binary compatibility has never really been a big thing in Linux, since the majority of software used is open source. Software is expected to compile and link against newer libraries, but there is no real incentive for existing binaries to remain compatible. And if the software doesn't compile against newer versions of libraries, well, Windows has similar issues.
The latest multi-platform packaging systems like Nix or Flatpak have largely solved the binary compatibility problem by providing some guarantees about library versions. This approach makes more sense in modern contexts with cheap storage and fast bandwidth.
- Steam Runtime: A set of common libraries that Linux-native games target for multi-distro compatibility; I believe this still uses Ubuntu as the upstream
- Steam OS: An Arch-based distro pre-configured to run Steam out of the box, used by the Steam Deck; it comes with extra stuff like gamescope to smooth over various issues other distros have with VRR, HDR, etc.
- Proton: Runs Windows games on Linux
Yes, you'd have to buff and polish it. But "paint some chrome on it" is hardly much of a blog post.
[1] Actually, are you sure the answer is "no" here? I wouldn't be at all shocked if some enterprising geek had source on GitHub implementing an MSI extractor and installer
It can also be achieved with static linking, or by shipping all the needed libraries and using a shell-script loader that sets LD_LIBRARY_PATH.
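As a rough illustration of the loader idea, here's a minimal sketch of the same trick as a small C++ wrapper (the layout is made up: the real binary is assumed to ship as `bin/app.real` with its bundled libraries in `lib/` next to the launcher):

```cpp
// launcher.cpp - hypothetical stand-in for the shell-script loader described above.
// Assumes (made-up layout) the real binary is shipped as <dir>/bin/app.real and
// its bundled libraries live in <dir>/lib. Build: g++ -o app launcher.cpp
#include <stdlib.h>
#include <string>
#include <unistd.h>
#include <libgen.h>
#include <limits.h>

int main(int argc, char** argv) {
    // Find the directory this launcher was started from.
    char self[PATH_MAX];
    ssize_t n = readlink("/proc/self/exe", self, sizeof(self) - 1);
    if (n < 0) return 1;
    self[n] = '\0';
    std::string dir = dirname(self);

    // Point the dynamic loader at the bundled libraries, keeping any existing path.
    std::string libs = dir + "/lib";
    if (const char* old = getenv("LD_LIBRARY_PATH"))
        libs += ":" + std::string(old);
    setenv("LD_LIBRARY_PATH", libs.c_str(), 1);

    // exec the real binary; the freshly exec'ed process's loader sees the new path.
    std::string real = dir + "/bin/app.real";
    execv(real.c_str(), argv);
    return 1;  // only reached if execv failed
}
```

The key point is that LD_LIBRARY_PATH has to be set before the dynamic loader of the real binary runs, which is why it's done in a wrapper process rather than inside the app itself.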
Also, glibc (contrary to the author's false claims) and properly designed libraries are backwards compatible, so in principle just adding the debs/rpms from an older Debian/Fedora that ships the needed libraries to the packaging repositories and running apt/dnf should work in theory, although it unfortunately might not in practice due to the general incompetence of programmers and distribution maintainers.
Win32 is obviously not appropriate for GNU/Linux applications, and you also have the same dependency problem here, with the same solution (ship a whole Wine prefix, or maybe ship a bunch of DLLs).
That doesn’t work for GUI programs which use a hardware 3D GPU. Linux doesn’t have a universally available GPU API: some systems have GL, some have GLES, some have Vulkan, all three come in multiple versions of limited compatibility, and many of the optional features are vendor-specific.
In contrast, it’s impossible to run modern Windows without working Direct3D 11.0 because the dwm.exe desktop compositor requires it. If a piece of software consumes Direct3D 11.0 and doesn’t require any optional features, it will run on any modern Windows (FP64 math support in shaders is an example of an optional feature; sticking to the required feature set is not very limiting in practice unless you need to support very old GPUs which don’t implement feature level 11.0). Surprisingly, it will also run on Linux systems which support Wine: without a Vulkan-capable GPU it will be slow, but it should still work thanks to Lavapipe, which is the Linux equivalent of Microsoft’s WARP, used on Windows computers without a hardware 3D GPU.
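For concreteness, "consuming Direct3D 11.0 with no optional features" looks roughly like this; a minimal sketch, not production code, with the WARP fallback mirroring what the comment describes for machines without a hardware GPU:

```cpp
// Sketch: create a Direct3D 11.0 device, falling back to the WARP software
// rasterizer when no capable hardware GPU is present.
#include <d3d11.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")
using Microsoft::WRL::ComPtr;

bool CreateDevice(ComPtr<ID3D11Device>& device, ComPtr<ID3D11DeviceContext>& context) {
    // Ask only for the guaranteed 11.0 feature set; no optional features.
    const D3D_FEATURE_LEVEL wanted[] = { D3D_FEATURE_LEVEL_11_0 };
    D3D_FEATURE_LEVEL got;

    // Try the hardware driver first, then WARP (software) as a fallback.
    const D3D_DRIVER_TYPE drivers[] = { D3D_DRIVER_TYPE_HARDWARE, D3D_DRIVER_TYPE_WARP };
    for (D3D_DRIVER_TYPE driver : drivers) {
        HRESULT hr = D3D11CreateDevice(
            nullptr, driver, nullptr, 0,
            wanted, 1, D3D11_SDK_VERSION,
            device.GetAddressOf(), &got, context.GetAddressOf());
        if (SUCCEEDED(hr))
            return true;
    }
    return false;
}
```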
Sure, you can run a 20 year old app, but that is not the same as a current app still working in 20 years, or even 5.
Not sure I follow. Sure, most modern programs are not using old-school WinAPI with GDI, but the stuff they added later is also rather stable. For example, the Chromium-based browser I’m looking at uses Direct3D 11 for graphics. It implements a few abstraction layers on top (ANGLE, Skia) but these are parts of the browser not the OS.
I view all that modern stuff like Direct3D, Direct2D, DirectWrite, Media Foundation as simply newer parts of the WinAPI. Pretty sure Microsoft will continue to support them for a long time. For example, they can’t even deprecate the 23-year-old DirectX 9 because it is still widely used; e.g. the current version of Microsoft’s own WPF GUI framework relies on Direct3D 9 for graphics.
On Windows, new layers are applied over the old. There is DirectX 9-12. New binaries may use 12 but the ones still using 9 are perfectly happy. Things like .NET work the same. You can have multiple apps installed relying on different .NET versions.
You can also run an old mesa from the time the app was built if it supports your newer hardware, but I'd rather consider that to be part of the platform, the same way you'd consider the DirectX libraries to be part of Windows.
An example from another comment: https://news.ycombinator.com/item?id=43519949
> I have flatpaks from several years ago that no longer work (Krita) due to some GL issues.
That’s an example of Linux GPU APIs being unstable in practice, and container images not helping to fix that.
But I suspect "GL issues" (i.e., GL API stability) is being mixed together with e.g. mesa issues if mesa is being bundled inside the app/in a "flatpak SDK" instead of being treated as a system library akin to what you would do with DirectX.
Mesa contains your graphics driver and window system integrations, so when the system changes, Mesa must change with it - but the ABI exposed to clients does not change, other than new features being added.
I'm running Loki game binaries just fine today btw.
Microsoft also provides quite good stability for DirectX and other extension APIs. You can still run old .NET apps without issues, as long as they didn't pull a Hyrum's Law on you and depend on incidental observable behavior.
OpenGL and Vulkan ABIs are also stable on Linux, provided by Mesa. The post is pretty focused on the simplicity of Win32, though, and that's the part I'm disputing as still being relevant today for new apps.
> As long as they didn't pull a Hyrum's Law on you
It is guaranteed that they "pull a Hyrum's Law", the question is just what apparent behavior they relied on.
It's true, but this touches on another point they made: what apps code to is other dynamically linked libraries. The kind that wine (or other host environments) can provide, without needing to mess with the kernel.
Then there are quality issues. If you search the internet for “Windows Vulkan issue” you’ll find many end users with crashing games, game developers with crashing game engines (https://github.com/godotengine/godot/issues/100807), recommendations to update drivers or disable some Vulkan layers in the registry, etc.
On Windows, Vulkan is simply not as reliable as D3D. The reasons include market share, D3D being a requirement to render the desktop, D3D runtime being a part of the OS supported by Microsoft (Vulkan relies solely on GPU vendors), and D3D being older (first version of VK spec released in 2016, D3D11 is from 2009).
Another thing, on Linux, the situation with Vulkan support is less than ideal for mobile and embedded systems. Some embedded ARM SoCs only support GLES 3.1 (which BTW is not too far from D3D 11.0 feature-wise) but not Vulkan.
Unless you are running Windows, in which case it doesn’t. Intel simply has not made a driver.
Yep, exactly. While the Vulkan API is well defined and mostly stable, there is no guarantee that the Linux implementation will also be stable. Moreover, Khronos graphics APIs only deal with the stuff after you've allocated a buffer and done all the handshakes with the OS and GPU drivers. On Linux, none of those have API / ABI / runtime-configuration stability guarantees. Basically, it works until just one of the libraries in the chain breaks compatibility.
All Linuxes I'm familiar with run Mesa, which gives you OpenGL and Vulkan.
Now GUI toolkits are more of an issue. That's annoying for some programs, many others do their own thing anyway.
True, but sad. The way to achieve compatibility on Linux is to distribute applications in the form of what are essentially tarballs of entire Linux systems. This is the "fuck it" solution.
Of course I suppose it's not unusual for Windows stuff to be statically linked or to ship every DLL with the installer "just in case." This is also a "fuck it" solution.
Not so bad when Linux ran from a floppy with 2 MB of RAM. Sadly, every library just got bigger and bigger without any practical way to generate a lighter, application-specific version.
You can still have very tiny Linux with a relatively modern kernel on tiny m0 cores, and there's ELKS for 16-bit cores.
It is not a packaging problem. It is a system design problem. Linux ecosystem simply isn't nice for binary distribution except the kernel, mostly.
macOS has solved this, but that is obviously a single vendor. FreeBSD has decent backwards compatibility (through the -compat packages), but that is also a single vendor.
That's also why I'm of the opinion that the world is worse off due to the fact that Linux "won" the Unix wars.
They do it by having standards in the OS, partial containerization, and above all: applications are not installed “on” the OS. They are self contained. They are also jailed and interact via APIs that grant them permissions or allow them to do things by proxy. This doesn’t just help with security but also with modularity. There is no such thing as an “installer” really.
The idea of an app being installed at a bunch of locations across a system is something that really must die. It’s a legacy holdover from old PC and/or special snowflake Unix server days when there were just not many machines in the world and every one had its loving admin. Things were also less complex back then. It was easy for an admin or PC owner to stroll around the filesystem and see everything. Now even my Mac laptop has thousands of processes and a gigantic filesystem larger than a huge UNIX server in the 90s.
Got it. So everything is properly designed but somehow there's a lot of general incompetence preventing it from working. I'm pretty sure the principle of engineering design is to make things work in the face of incompetence by others.
And while glibc is backwards compatible, and that generally does work, glibc is NOT forward compatible, which is a huge problem - it means that you have to build on the oldest distro you can find so that the built binaries actually work on the arbitrary machines you try to run them on. Whereas on Mac & Windows it's pretty easy to build applications on my up-to-date system targeting older variants.
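(For what it's worth, the usual alternative to building on an old distro is to pin individual symbols to older glibc versions; a rough sketch, assuming x86-64 where GLIBC_2.2.5 is the baseline version - check what your libc actually exports, e.g. with `objdump -T`, before relying on it:)

```cpp
// Hypothetical sketch of the ".symver" workaround for glibc's lack of forward
// compatibility: force the reference to bind to an older symbol version so the
// binary also loads on older systems. The version string (GLIBC_2.2.5 here) is
// the x86-64 baseline and is an assumption; other architectures differ.
#include <cstring>

// Without this, a binary built on a new distro may end up referencing e.g.
// memcpy@GLIBC_2.14 and refuse to start on anything older.
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

int main() {
    char dst[16];
    std::memcpy(dst, "hello", 6);  // resolves against the old symbol version
    return static_cast<int>(dst[0] != 'h');
}
```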
But it is working, actually:
* If you update your distro with binaries from apt, yum, zypper etc. - they work.
* If you download statically-linked binaries - they work.
* If you download Snaps/Flatpak, they work.
> it means that you have to build on the oldest distro you can find so that the built binaries actually work on the arbitrary machines you try to run them on.
Only if you want to distribute a dynamically-linked binary without its dependencies. And even then - you have to build with a toolchain for that distro, not with that distro itself.
Even statically linked code tends to be dynamically linked against glibc. You’ve basically said “it works but only if you use the package manager in your OS”. In other words, it’s broken and hostile for commercial 3p binary distribution which explains the state of commercial 3p binary ecosystem on Linux (there’s more to it than just that, but being actively hostile to making it easy to distribute software to your platform is a compounding factor).
I really dislike Snaps/Flatpaks, as they're distro-specific and overkill if I'm statically linking and my only dynamic dependency is glibc.
If you build a statically linked program with only glibc dynamically linked, and you do that on Linux from 2005, then that program should run exactly the same today on Linux. The same is true for Windows software.
Linux is the only space where you have to literally do your build on an old snapshot of a distro with an old glibc so that you can distribute said software. If you're in C++ land you're in for a world of hurt, because the version of the language is now constrained to whatever was available when that old distro from 5+ years ago was snapshotted, unless you build a newer compiler yourself from scratch. With Rust at least this is much easier, since they build their toolchain on an old version of Linux; thus their binaries are similarly easy to distribute, and the latest Rust compiler is trivially easy to obtain on old Linux distros.
Source: I’m literally doing this today for my day job
As for being overkill, surely you can see the advantage of having a single uniform distribution format from the end user's perspective? Which, sure, might be overkill for your case (although app isolation isn't just about dependencies), but the important thing is that it is a working solution that you can use, and users only need to know how to install and manage them.
Only if you're using a distro that doesn't come with it preinstalled. But that doesn't make it distro-specific?
> And now I have to host a flat pack repo and get the user to add my repo if it’s proprietary software.
You don't have to do that, you can just give them a .flatpak file to install: https://docs.flatpak.org/en/latest/single-file-bundles.html
The reason to host a repo regardless is to enable easy auto-updates - and I don't think you can call this bit "smooth and simple" on Windows and Mac, what with most apps each doing their own thing for updates. Unless you use the app store, but then that's exactly the same as repos...
Still, to reliably target older Windows versions you need to tell your toolchain what to target. The Windows SDK also lets you specify the Windows version you want to target via the WINVER / _WIN32_WINNT macros, which make it harder to accidentally use unsupported functions. Similarly, the compilers and linkers for Windows have options to specify the minimum Windows version recorded in the final binary and which libraries to link against (classic Win32 DLLs or the UCRT). Unfortunately there is no such mechanism to specify a target version for glibc/gcc, and you have to either build against older glibc versions or rely on third-party headers. Both solutions are workable and allow you to create binaries with a wide range of glibc version compatibility, but they are not as nice as direct support in the toolchain would be.
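A sketch of what that looks like on the Windows side (GetSystemTimePreciseAsFileTime is just picked here as an example of a Windows 8+ function):

```cpp
// Sketch: declare Windows 7 as the minimum supported version before pulling in
// the SDK headers. APIs introduced in later releases are then not declared, so
// accidentally using one becomes a compile error instead of a load failure on
// older machines.
#define WINVER       0x0601   // Windows 7
#define _WIN32_WINNT 0x0601
#include <windows.h>

int main() {
    // Fine: GetTickCount64 exists since Vista, so it is visible at this level.
    ULONGLONG uptime = GetTickCount64();

    // Would not compile here: GetSystemTimePreciseAsFileTime is Windows 8+,
    // and the SDK hides it while _WIN32_WINNT is below 0x0602.
    // FILETIME ft; GetSystemTimePreciseAsFileTime(&ft);

    return uptime > 0 ? 0 : 1;
}
```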
Isn’t this easily solved by building in a container? That's something a lot of people do anyway. I do it all the time because it insulates the build from changes in the underlying build agents: if the CI team decides to upgrade the build-agent OS to a new release next month, or to migrate the agents to a different distro, building in a container (mostly) isolates my build job from that change, whereas doing it directly on the agents exposes it to them.
You can run non-games on Proton. Most things work.
[1] https://ounapuu.ee/posts/2025/02/05/done-with-ubuntu/ [2] https://getaurora.dev/en
Why, oh why, do I have to deal with exe files that are not even 5 years old and don't work on my Windows laptop after an update... I wish I lived in the author's universe...
> In Windows, you do not make system calls directly. Instead, you dynamically link to libraries that make the system calls for you.
Isn't the actual problem the glibc shared library since the Linux syscall interface is stable? (as promised by "don't break user space") - e.g. I would expect that I can take a 20 years old Linux binary which only does syscalls and run that on a modern Linux, is that assumption wrong?
ABI stability for Windows system DLLs is also only one aspect, historically Microsoft has put a ton of effort into preserving backward compatibility for popular applications even if they depend on bugs in Windows that had been fixed in later Windows versions.
I expect that Windows is full of application specific hacks under the hood to make specific old applications work.
E.g. just using WINE as the desktop Linux API won't be enough, you'll also have to extend the "don't break user space" promise from the kernel to the desktop runtime environment, even if it means "bug-by-bug-compatibility" with older versions.
I tend to just compile on a really old distro to work around this. Tbf you need to do the same thing on Mac, it just isn't so much of an issue because Macs are easier to update.
The other side of the problem is that the whole Linux ecosystem is actively hostile to bundling dependencies and binary distribution in general, so it's not a surprise that it sucks so much.
Yes
> I would expect that I can take a 20 years old Linux binary which only does syscalls and run that on a modern Linux, is that assumption wrong?
You’re right. But those apps are simple enough that we could probably compile them quicker than they actually run.
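(Such a program really is tiny; here's a rough, x86-64-only sketch of what "only does syscalls" means, assuming GCC and a `g++ -static -nostdlib` build - no libc involved at all:)

```cpp
// Hypothetical sketch of a "syscalls only" program: no libc, just the raw
// x86-64 Linux syscall ABI, which the kernel promises not to break.
// Assumed build: g++ -static -nostdlib -o hello hello.cpp

extern "C" void _start() {
    const char msg[] = "hello from raw syscalls\n";

    // write(1, msg, len) - syscall number 1 on x86-64
    long ret;
    asm volatile("syscall"
                 : "=a"(ret)
                 : "0"(1L), "D"(1L), "S"(msg), "d"(sizeof(msg) - 1)
                 : "rcx", "r11", "memory");
    (void)ret;

    // exit(0) - syscall number 60 on x86-64; never returns
    asm volatile("syscall" : : "a"(60L), "D"(0L) : "rcx", "r11");
    __builtin_unreachable();
}
```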
> I expect that Windows is full of application specific hacks under the hood to make specific old applications work.
Yes [0]!
> just using WINE as the desktop Linux API won't be enough, you'll also have to extend the "don't break user space" promise from the kernel to the desktop runtime environment
Yes, but. Windows is the user space and kernel, for the most part. So the Windows back-compat extends to both the desktop runtime and the kernel.
You might argue it’s a false equivalence, and you’re technically correct. But that doesn’t change the fact that my application doesn’t work on Linux but it does on windows.
Just wanted to point out that ABI stability alone probably isn't the reason why Windows is so backward compatible, there's most likely a lot of 'boring' QA and maintenance work going on under the hood to make it work.
Also, FWIW, some of the early D3D9 games I worked on no longer run out of the box on Windows (mostly because of problems related to switching into fullscreen); I guess those games were not popular enough to justify a backward compatibility workaround in modern Windows versions ;)
Windows gives (in practice) DE, user space, and kernel stability, and various Linux distributions don’t. If you care about changing the Linux ecosystem to provide that stability it matters, but if you want to run an old application it doesn’t.