On Debian you're one package away:
sudo apt install wine-binfmt
Otherwise you're still pretty close (run these as root):
echo 'none /proc/sys/fs/binfmt_misc binfmt_misc defaults 0 0' >> /etc/fstab
mount -a
echo ':DOSWin:M::MZ::/usr/bin/wine:' > /proc/sys/fs/binfmt_misc/register
It's not just about preventing applications from reading other applications' files, but also about firewalling each application individually
For example, if you don't want the application you've mapped to user id 1001 to have any networking, use iptables with '-m owner --uid-owner 1001 -j DROP'
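Spelled out, that rule looks roughly like this (requires root; uid 1001 and the plain OUTPUT-chain placement are assumptions about your setup):

```shell
# Drop every packet generated by processes running as uid 1001
iptables -A OUTPUT -m owner --uid-owner 1001 -j DROP
# Variant: keep loopback working so the app can still talk to itself
# iptables -A OUTPUT -m owner --uid-owner 1001 ! -o lo -j DROP
```

The owner match only works in the OUTPUT and POSTROUTING chains, which is fine here since we only care about traffic the app itself generates.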
I moved from Windows to Linux a few years ago, I have a few Windows apps I still love a lot (mostly Word and Excel) and thanks to wine I will always be able to use them.
They are also extremely fast: cold-starting Word (or Excel) on my laptop takes less than a second, and they use far less RAM
Personally, I'd rather purchase a few shrink wrapped old versions of Office from ebay than bother with LibreOffice, Abiword or the online version of Office.
EDIT: I can't find the old recording I made showing how fast it was, but here's what it looks like on my hyprland desktop: you can see in btop that it doesn't take much resources https://www.reddit.com/r/unixporn/comments/11w3zzj/hyprland_...
In case you're not aware, wine prefixes each use their own settings, but are not isolated from one another.
https://gitlab.winehq.org/wine/wine/-/wikis/FAQ#how-good-is-...
> but I think the better way is to do it like android: one "user" per application.
This would help somewhat, assuming you don't run them all in one user's X session. On Linux, some desktop environments have a "switch user" action to start a separate desktop session running as another user on another virtual console. You can switch between them with Control+Alt+F2, etc.
That's a great point!
I'm aware, which is why I recommend instead that wine apps each be run under a different userid: I don't want any given app to have access to anything it doesn't absolutely need
> This would help somewhat, assuming you don't run them all in one user's X session
When I start a given wine app, the script starting it allows this user id to render on my Xwayland
It is not as secure as running each on its own X session, but wayland compositors can offer more isolation as needed.
Why would one want to prevent applications from reading other applications' files?
We're talking about running desktop applications designed for an OS that isn't built around any concept of application isolation, and for which using a common filesystem is a primary mechanism of interoperability.
Because I can, and because I don't trust Windows applications to be secure.
Thanks to that, I have no problem running 15-year-old office software: even if I knew it was malicious, I also know there's nothing it can do without network access, without file access, and with resource constraints (so it can't even cause a denial of service, except to itself).
In the worst case, I guess it could try to erase its own files? (but it would then be restored from an image on the next run, and I would keep going)
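As a sketch of the resource-constraint part: the wrapper script that launches such an app can lower its limits before exec'ing wine (the numbers here are arbitrary):

```shell
# Cap the address space and CPU time of everything started from this shell
ulimit -v 1048576    # virtual memory, in KiB (~1 GiB)
ulimit -t 3600       # CPU seconds
echo "vmem=$(ulimit -v) cpu=$(ulimit -t)"
```

Since limits are inherited across exec, anything wine spawns afterwards is bound by them too.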
Great. Except... WTF can you do with an office application that can't read or write files?
I thought it was impossible to run newer versions of Office on Linux.
Myself I often prefer LibreOffice, but more options are more options!
I will try Office 2013 (I'd like a version that works well in wine64!)
WineHQ reports that the Office 2013 64-bit installer is "gold", but the apps required a few tweaks and Access sometimes failed.
Generally, the 2013-2016 era seems to work on wine, judging by the few applications I checked
Winetricks[1] can be used to acquire and install a set of default fonts directly from Microsoft.
Furthermore, Windows font fallback differs substantially from that of Linux and similar systems, which generally utilize Fontconfig and FreeType with font relationships defined in configuration files. In contrast, Windows (and consequently Wine) employs a font linking mechanism[2]. Windows handles font linking natively, whereas Wine requires manual registry configuration[3].
[1] https://github.com/Winetricks/winetricks
[2] https://learn.microsoft.com/en-us/globalization/fonts-layout...
[3] https://stackoverflow.com/questions/29028964/font-recognitio...
Doesn't wine delegate rendering to FreeType? Might as well delegate font fallback to FreeType.
It's just you. I set the DPI and high-res options to run old Office apps, and they have very nice fonts both on my 2K laptop and on a 4K screen.
Try `xprop -root -f _XWAYLAND_GLOBAL_OUTPUT_SCALE 32c -set _XWAYLAND_GLOBAL_OUTPUT_SCALE 2`
The simplest solution, to me, is to just distribute containers (or some other sandbox) with wine in them, plus the necessary shenanigans to get the windows program (just the one) working in the container. Everyone gets the same artifact, and it always works. No more dicking around with wine settings, because it's baked in for whatever the software is.
Yes, this is tremendously space inefficient, so the next step would be a way of slimming wine down for container usage.
The only real barrier to this system is licensing and software anti-patterns. You might have to do some dark magic to install the software in the container in the first place.
https://support.codeweavers.com/en_US/2-getting-started/2-in...
I believe it started with Cedega, but I could be wrong. That's where I first recall encountering it.
A few years ago I purchased a few shrink-wrapped copies of Office on ebay, one for each of the versions Wine claimed to support best, tested them with wine32 and wine64, and concluded the "sweet spot" was Office 2010 in wine32 (it may have changed, as wine keeps evolving)
Yes, it's 15-year-old software, but it works flawlessly with Unicode xkb symbols! Since it doesn't have any network access, and each app is isolated under a different user id, I don't think it can cause any problem.
And if I can still use vim to do everything I need and take advantage of how it will not surprise me with any unwanted changes, I don't see why I couldn't use, say, an old version of Excel in the same way!
Maybe not much?
A few months ago, I ran out of power (my mistake: I use full-screen apps to avoid distractions, so I didn't realize I was unplugged)
After plugging in and restarting Linux then the ancient version of Word I was using, I got a pleasant surprise: the "autosaved" version of the document I was editing, with nothing lost!
As for LLMs: Excel 2010 may not have been made for AI, but wine copy/paste and a few scripts work surprisingly well!
Word is similar.
Features from later versions will either not show up or show up as boxes
Back then, Word's auto-save updated the file that you were working on rather than creating a separate backup file. I liked that better, though there might have been a good reason for changing approaches in later versions of Word.
I always liked this idea; but wouldn't you run into issues with file permissions? And if not, wouldn't that mean that the program in question would have access to all your files anyhow, removing the benefit of isolation?
I use scripts to automate everything - including allowing wine to use Xwayland (because until I start the application I want, its userid is not allowed to show content on my display)
If you want to try using wine with different user ids, try to start with a directory in /tmp like /tmp/wine which is group writable, with your windows app and your user belonging to the same group.
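A per-app launcher could look roughly like this (the user name, group bits, and paths are all illustrative; the privileged lines are commented out):

```shell
# Shared, group-writable prefix area; the setgid bit makes new files
# inherit the directory's group, so your user and the app's user can share it
mkdir -p /tmp/wine
chmod 2770 /tmp/wine
# xhost +si:localuser:officeuser     # allow only this uid on the display
# sudo -u officeuser env WINEPREFIX=/tmp/wine/office wine WINWORD.EXE
ls -ld /tmp/wine
```

The xhost/sudo lines are the part that grants display access only for the duration of the app, per the comment above.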
Half of this incompatibility is because Linux is flexible, anyway. My system is different from your system, and did anyone test on both? If you want a more stable ABI then you need a more stable system.
Rather than everyone having to get the software working on their machine, you would get it working once in the container, and then just distribute that.
> Windows application submissions that are using Wine or any submissions that aren't native to Linux desktop and is using some emulation or translation layer will only be accepted if they are submitted officially by upstream with the intention of maintaining it in official capacity.
Although, I am curious why; they don't seem to have a general problem with unofficial packages, so I'm not sure why a translation layer makes any difference. It doesn't seem different from happening to use any other runtime (e.g. nothing is said about Java or .NET).
It would take a large organization with enough connections to cut through this. You'd probably need to cut a deal so you could distribute their software, and you'd need to provide a mechanism for users to be able to make purchases. Even then, there are various licensing challenges, because you would be distributing the same install, so thousands (or millions) of "installs" would effectively have the same serial or license number.
It's nontrivial, but the basic idea is straightforward and doable. The challenge is how windows software is distributed and licensed, not anything technical.
I think this could work as a translation layer because containers already abstract away everything at the syscall level.
All that's missing is a GUI for that, one that can create multiple sandboxes easily, per application, and remember what you configured and installed there (and add it to the Winefile).
Regarding the shared-serials problem: you can easily diff the .reg file that wine adds there, and if anything pops out in \\software, you can assume it's a custom field and offer it as an environment variable for the container.
[1] https://usebottles.com [2] https://github.com/bottlesdevs/programs/blob/main/Games%2Fit...
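A toy illustration of that .reg diff (file contents, paths, and the key name are all made up):

```shell
mkdir -p /tmp/regdemo
cat > /tmp/regdemo/before.reg <<'EOF'
[Software\\ExampleApp]
"Serial"="PLACEHOLDER"
EOF
cat > /tmp/regdemo/after.reg <<'EOF'
[Software\\ExampleApp]
"Serial"="ABCD-1234"
EOF
# Lines unique to after.reg are candidates for per-user environment variables
diff /tmp/regdemo/before.reg /tmp/regdemo/after.reg || true
```

In a real prefix you would diff the prefix's user.reg before and after entering the license key.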
[0] https://github.com/notepad-plus-plus/notepad-plus-plus/blob/... [1] https://appdb.winehq.org/objectManager.php?sClass=applicatio...
It doesn't have to be. Old software is cheap, even shrink wrapped ("new old stock")
The key difference is that applications run under wine will always have some subtle quirks and misbehaviors that break usability, which the dev can work around when given the chance.
I think the idea is to provide a seamless Windows-like experience, so users work exactly how they expect to work under Windows. Without having to fiddle, modify configurations, or wrestle with different settings. Just click and run.
An end user wouldn't need to modify configs or wrestle settings, because that's already done for you upstream. You just get an artifact that you can click and run. From the other posts, Proton and Steam Deck already do something similar, and this is also conceptually similar to the way AppImages, Flatpaks etc. work.
> but the hacks used to make one app work can break others and vice versa
I think a lot of these problems could be avoided with a singular OS whose sole goal is to support windows exes.
There's zero reason you couldn't create a small abstraction layer around docker so you can install "executables" that are really just launching within a container. I mean, isn't that the whole idea behind flatpak, snaps, and appimages?
The point is to leverage modern abstraction techniques so that people don't need to learn a new system.
It could even support running as a self contained application on Windows, with all needed DLLs provided.
IMHO, you're comparing two different things. The traditional method of installing apps on Windows is to bundle all dynamic dependencies with the app, while on Linux dynamic dependencies are shared between apps. So it's not surprising that when you change an app's dependencies, it stops working.
There are a few ways to solve this, and you're free to choose:
- distribute the same way as on Windows
- link statically
I was involved in replacing Windows systems with Linux + Wine, because (mission-critical industrial) legacy software stopped working. No amount of tweaking could get it to work on a modern Windows system. With Wine it ran without a hitch, once all the required DLL files were tracked down.
While Wine may indeed be quite stable and a good solution for running legacy Windows software, I think that any dynamically linked legacy software can cause issues, both on Windows and Linux. Kernel changes may be a problem too. While Windows is often claimed to be backwards compatible, in practice your mileage may vary, as my client found out the hard/expensive way.
I moved from Windows 11 to Linux for the same reason: I was using an old version of Office because it was faster than the included apps. The full Word started faster than Wordpad (it was even on par with Notepad!), and the Outlook from an old Office used less RAM and was more responsive than the one included with Windows!
When I got a new laptop, I had problems with the installation of each of the old versions of Office I had around, and there were rumors that old versions of Office would be blocked.
I didn't want to take the risk, so I started my migration.
> While Windows is often claimed to be backwards compatible, in practice your mileage may vary
It was perfectly backwards compatible: Windows worked fine with very old versions of everything, until some versions of Windows 11 started playing tricks (even with a Pro license)
I really loved Windows (and AutoHotKey and many other things), but now I'm happy with Linux.
Oh, do you know how I can configure e.g. Win+1, Win+2, etc. to switch to the corresponding virtual desktops? And how to disable that slow animation and just switch instantly?
Maybe you have some ideas where I should search. I've used Linux as my OS for a long time, but now I need to use Windows at my job. So I'm trying to bring my Windows usage experience as close as possible to what's familiar and common on Linux.
I see you were given an answer for the slow animation. For most UI tweaks, regedit is a good starting point.
You may also like PowerToys, but I suggest you take the time to create AHK scripts, for example if you want to make your workflow keyboard-centric
> So I'm trying to bring my Windows usage experience as close as possible to what's familiar and common on Linux.
I did the opposite with the help of hyprland on arch, but it took me years to get close to how efficient I was on Windows, where there are many very polished tools to do absolutely anything you can think of.
Settings > Accessibility > Visual effects > Animation effects
There's no built-in way to set hotkeys to switch to a specific desktop. And my primary annoyance is that there's no way to set hotkeys to move a given window to a different desktop.
On the Windows side, nobody bundles Windows GUI libraries, OpenGL drivers or sound libraries. On the Linux side, system libs have to be somewhere in the container, and you have to hope they're still compatible.
You cannot link everything statically either. Starting with Glibc, there are many libraries that don't work fully or at all when statically linked.
Unless static linking/relinking is extremely costly, it seems unnecessary to use shared libraries in a top-level docker image (for example), since you have to rebuild the image anyway if anything changes.
Of course if you have a static executable, then you might be able to simplify - or avoid - things like docker images or various kinds of complicated application packaging.
Depends on what you link with and what those applications do; I would also check the end result. Golang on top of a Docker container is the best case as far as compatibility goes. Docker means you don't need to depend on the base distro. Go skips libc and provides its own network stack. It even parses resolv.conf and runs its own DNS client. At this point, if you replace the Linux kernel with FreeBSD, you lose almost nothing in terms of functionality. So it is a terrible comparison for an end-user app.
If you compile all GUI apps statically, you'll end up with a monstrous distro that takes hundreds of gigabytes of disk space. I say that as someone who uses Rust to ship binaries, and my team already had to use quite a few nasty hacks that walk the ABI-incompatibility edge of rustc to reduce binary size. It is doable, but would you like to wait hours for an update every single time?
Skipping that hypothetical case, the reality is that for games and other end user applications binary compatibility is an important matter for Linux (or any singular distro even) to be a viable platform where people can distribute closed-source programs confidently. Otherwise it is a ticking time-bomb. It explodes regularly too: https://steamcommunity.com/app/1129310/discussions/0/6041473...
The incentives to create a reliable binary ecosystem on Linux are not there. In fact, I think the Linux ecosystem creates the perfect environment for the opposite:
- The majority economic incentive is coming from server providers and some embedded systems. Both of those cases build everything from source, and/or rely on a limited set of virtualized hardware.
- The cultural incentive is not there, since many core system developers believe that binary-only software doesn't belong on Linux.
- The technical incentives are not there, since a Linux desktop system is composed of independent libraries from semi-independent developers, each targeting whatever library versions were released in the same narrow slice of time.
Nobody makes Qt3 or GTK2 apps anymore, nor are they supported. On the Windows side, Rufus, Notepad++ etc. are all written against the most basic Win32 functions, and they get access to the latest features of Windows without requiring huge rewrites. It will be cursed, but you can still make an app that uses Win32, WPF and WinUI in the same app on Windows: three UI libraries from three decades, and you don't need to bundle any of them with the app. At most you ask the user to install the latest dotnet.
And yet the original Macintosh toolbox was 64 kilobytes. Black and white though, and no themes out of the box.
Even a 1MB GUI library (enough for a full Smalltalk-80, or perhaps a compact modern GUI) would be in the noise for most apps.
I'm not sure about linux syscall ABI stability either, or maybe other things that live in the kernel?
Yes. The OpenGL driver is loaded dynamically, but... are you sure there are any problems with OpenGL ABI stability? I have never heard of breaking changes in it
Likewise, as the author states, there's nothing intrinsic to Linux that makes it have binary compatibility issues. If this is a problem you face, and you're considering making a distro that runs EXEs by default through an emulation layer, you are probably much better off just using Alpine or one of the many other musl-based distros.
At certain points he talks about syscalls, libc (I'm assuming glibc), PE vs. ELF, and an 'ABI'. Those are all different things, and IIUC all are fairly stable on Linux; what isn't stable is userspace libraries such as GTK and Qt. So, what are we talking about?
There are also statements like this, which, I'm not a kernel developer, but sound a little too good to be true:
> A small modification to the "exec" family of system calls to dispatch on executble type would allow any Linux application to fork an exec a Windows application with no effort.
He goes on to talk about Gatekeeper (which you can disable), Recall (which is disabled by default), and signing in with a Microsoft account (which can be easily bypassed, though he linked an article saying they might remove it). He also talks about "scanning your computer for illegal files", I don't know what this is referring to, but the only thing I could find on Google was Apple's iCloud CSAM scanning thing. That's not on your computer, and it got so much backlash that it was cancelled.
There's plenty of stuff to criticize about these companies and their services without being dramatic, and the idea of Linux having more compatibility with Win32 via Wine isn't bad.
> A small modification to the "exec" family of system calls to dispatch on executble type would allow any Linux application to fork an exec a Windows application with no effort.
That isn't "too good to be true", it's so good it is false – no kernel modification is required because the Linux kernel already supports this via binfmt_misc. You just need to configure it. And depending on how you installed Wine, you may find it has even already been configured for you.
After Microsoft sued them and they changed their name, the bubble was burst and when Ubuntu appeared its niche as a beginner distro ebbed away.
I was surprised to hear it was still alive via a Michael MJD video a month or two ago.
People who primarily use Linux often forget that Windows has the exact same problem. In the case of Windows, libc is distributed as part of the Visual C++ runtime. Each version of Visual Studio has its own version of the VC++ runtime, and the application is expected to redistribute the version of VC++ it needs.
The only thing Windows does better is ensuring that they maintain backwards compatibility in libc until they release a new version of Visual Studio.
My opinion is that they're both great. I really like how clean and well thought out the Windows APIs are. Compared to Linux equivalents they're very stable and easier to use. But that doesn't mean there is anything wrong with the C stdlib implementation on either OS. For system APIs, though, Linux is a bit messy; that mess is the result of having so many people with strong opinions, and of Linux trying to adhere to the Unix principle of a modular user-space ecosystem.
For example, there is no "Linux graphics api", there is X11 and Wayland and who knows what else, and neither have anything to do with the Linux project. There are many highly opinionated ways to do simple things, and that is how Linux should be. In the same vein, installing apps on Linux is simply querying your package manager, but on Windows there is no "Microsoft package repo" where everyone dumps their apps (although they are trying to fix that in many ways), and that's how Windows should be.
Let Linux be Linux and Windows be Windows. They're both great if you appreciate them for what they are and use them accordingly.
> Let Linux be Linux and Windows be Windows. They're both great if you appreciate them for what they are and use them accordingly.
What if you technically prefer the Windows way, but are worried about Microsoft's behavior related to commercial strategy, lock-down, privacy...?
The author envisions a system that's as technically stable as Windows, yet as free as Linux.
Reverse-engineer its undesirable behavior and mitigate it. The real stuff that scares me is hardware-based (secure enclave computing, for example) and the legal measures it is taking to prevent us from hacking it.
ReactOS exists, as does Wine. Linux is a purely monolithic kernel, unlike NT, which is a hybrid that has the concept of subsystems built into it. Linux would have to grow the concept of subsystems and an NT-interop layer (probably based off of Wine), and I fail to see the advantage of that over Wine.
In the end, where is the demand coming from, I ask? Not from Linux devs, in my opinion. I suppose a Wine-focused distro might please folks like you, but Wine itself still has lots of bugs and errors even after all these years. I doubt it is even keeping up with all the Windows 11 changes. What the author proposes is, in my opinion, not practical, at least not if you are expecting an experience better than ReactOS or Wine. If it is just a Win32/winapi interop layer, it might be possible, but devs would need to demand it; otherwise, who will use it?
Linux users are the most "set in their ways" in my experience; try convincing any Linux dev to stop using gtk/qt and write apps for "this new Windows-like api interface to create graphical apps".
But ultimately, there is no harm in trying, other than wasted time and resources. I too would like to see an ecosystem that learns from and imitates Windows in many ways (especially security measures).
I still believe we would be in a better place had BSD been ready for adoption before Linux. Linux is a kernel plus a wide family of operating systems assembled from that kernel and different bits and pieces, while BSD tried to be a very coherent operating system from the start.
I had like ten of them installed (I think several from the same year!) because every program usually bundles its own.
I found the exact version of vcredist installer I needed but then that one refused to install because I already had a slightly newer version. So I had to uninstall that first.
As far as I'm aware this problem still exists in Wine, I installed something in Wine yesterday and I had to use winetricks commands to get the vcredist installers from Microsoft's servers. (Also illegally download some fonts, otherwise my installer refused to start...)
Other libraries are the problem, usually. People are generally really good about changing the .so version of a library when the ABI changes in a backwards-incompatible way. Usually distributions ship both versions until everything they ship either has upgraded or been removed. Solutions like appimage can allow you to ship these libraries in your app.
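A sketch of how those versioned sonames coexist on disk (the library name and versions are made up; real files would be actual shared objects):

```shell
# Two ABI-incompatible versions of the same library, side by side
mkdir -p /tmp/sodemo
touch /tmp/sodemo/libfoo.so.1.0.0 /tmp/sodemo/libfoo.so.2.0.0
ln -sf libfoo.so.1.0.0 /tmp/sodemo/libfoo.so.1   # old ABI, kept for old binaries
ln -sf libfoo.so.2.0.0 /tmp/sodemo/libfoo.so.2   # new, incompatible ABI
ls -l /tmp/sodemo
```

Binaries linked against libfoo.so.1 keep loading the old ABI while new builds pick up libfoo.so.2, which is exactly the coexistence window distributions rely on.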
https://blogs.gentoo.org/mgorny/2024/09/28/the-perils-of-tra...
Of course, if you did need to support this case, you don't need to throw the baby out with the bathwater necessarily. You'd just need a _TIME_BITS=32 build of whatever libraries do have time_ts in their ABI, and if that blog post is any indication Gentoo will probably have a solution for that in the future. I like the idea of jamming more backwards-compatibility work into the system dynamic linker. I think we should do more of that.
In any case, this issue is not a case where glibc broke something, it's a case where the ABI had to break. I understand that may seem like nitpicking, but on the other hand, consider what happens in 2038: All of the old binary software that relies on time_t being 32-bit will stop working properly even if you do have 32-bit time_t shims, at which point you'll need dirtier and likely less effective hacks if you want to be able to keep said software functioning.
Someone comes along and builds their software on the latest bleeding-edge Linux distro. It won't run on older (or even many current) Linux desktops. People curse Linux ABI instability because new binaries aren't supported by an older operating system. It is in fact the opposite to the Windows situation, in which older software continues to run on newer operating systems, but good luck getting the latest Windows software to run on a Windows 95 desktop. People are very quick to conflate the two situations so they can score more fake internet points.
The situation is not limited to desktops. For example, a very popular commercial source forge web service does not work on browsers released more than about 10 weeks ago. The web itself has become fantastically unstable and almost unusable for anything except AI bots consuming what other AI bots spew.
But they are installed side-by-side, major versions at least.
Failing that, run it inside a container
It’s one of the many reasons the Windows base install is so much heavier than a typical Linux base install.
The reason Windows retains older versions of executables while Linux doesn’t is because Windows doesn’t have a package manager like Linux distros. Ok, there’s now Windows Store plus a recent-ish CLI tool that was based on one of the many unofficial package managers, but traditionally the way to install Windows application was via manual downloads and installs. So those installers would typically come bundled with any shared libraries they’d need and often have those shared libraries in the application directory. Leading to lots of duplication of libraries.
You could easily do the same thing in Linux too but there’s less of a need because Linux distribution package managers are generally really good. But some 3rd party package managers do take this kind of approach, eg Nix, Snap, etc.
So it’s not that Linux is “unstable” but more that people have approached the same problem on Linux in a completely different way.
The fact that drag-and-drop installs work on macOS demonstrates that there isn’t really a UNIX-like limitation preventing Windows-style installs. It’s more that Linux distributions prefer a different method for application installation.
As an example, the latest Atari Jaguar linker (aln) for Linux was released back in 1995. It's a proprietary, statically-linked 32-bit Linux a.out executable. To run this on a modern Linux system, you need to:
- Bump vm.mmap_min_addr from 65536 down to 4096, a privileged operation;
- Use an a.out loader, because the Linux kernel dropped support for a.out back in 2022;
- Possibly use qemu-user if your system doesn't support 32-bit x86.
That's the best-case scenario, because some of the old Atari Jaguar SDK Linux binaries are dynamically-linked a.out executables and you're basically stuck running ancient Linux kernels in a VM. It's at a point where someone at the AtariAge forums was seriously considering using my delinking black magic to port some of these old programs to modern Linux. It's quite telling when reverse-engineering an executable with Ghidra in order to export relocatable object files to relink (with some additional steps I won't get into) is even an option on the table.
Sure, given enough determination and piles of hacks you can probably forcefully run any old random Linux program on modern systems, but odds are that Windows (or Wine or ReactOS) will manage to run a 32-bit x86 PE program from thirty years ago with minimal compatibility tweaks. Linux (both distributions and to a lesser degree the kernel) simply don't care about that use-case, to the point where I'd be pleasantly surprised if anyone manages to run Tux the Penguin: A Quest for Herring as-is on a modern system.
That’s exactly what dynamically linked executables are: user land
> As an example, the latest Atari Jaguar linker (aln) for Linux was released back in 1995. It's a proprietary, statically-linked 32-bit Linux a.out executable.
That’s not a user land problem. That’s a CPU architecture problem. Windows solves this with WOW64, which provides a compatibility layer for 32-bit pointers et al.
There are 32-bit compatibility layers for Linux too, but they’re not going to help if you’re running an a.out file, because it’s a completely different type of executable format (ie not equivalent to a 32-bit statically compiled ELF).
Windows has a similar problem with COM files (the early DOS executable format). And lots of COM executables on Windows don’t work either. Windows solves this problem with emulation, which you can do on Linux too. The awkward part of Linux here is that it doesn’t ship those VMs as part of its base install, but why would it because almost no one is trying to run randomly downloaded 32bit a.out files.
To be clear, I’m not arguing that Linux’s backwards-compatibility story is as good as Windows’. It clearly isn’t. But the answer to that isn’t that Linux can’t be backwards compatible, it’s that Linux traditionally hasn’t needed to be. However, all of the same tools Windows uses for its compatibility story are available to Linux for Linux executables too.
> That’s not a user land problem. That’s a CPU architecture problem. Windows solves this with WOW64, which provides a compatibility layer for 32-bit pointers et al.
In this specific case, it really is a user-land problem.
I went to the trouble of converting that specific executable into a statically linked 32-bit x86 ELF executable [1], to run as-is on modern x86 and x86_64 Linux systems. Besides rebasing it at a higher virtual address and writing about 10 lines of assembly to bridge the entry points, it's the exact same binary code as the original artifact. Unless you've specifically disabled or removed 32-bit x86 emulation, it'll run on an x86_64 kernel with no 32-bit userland compatibility layers installed.
Just for kicks, I've also converted it into a dynamically linked executable (with some glue to bridge glibc 1.xx and glibc 2.xx) and even into an x86 PE executable that can run on Windows (using more glue and MSYS2) [2].
> Windows has a similar problem with COM files (the early DOS executable format). And lots of COM executables on Windows don’t work either. Windows solves this problem with emulation, which you can do on Linux too.
These cases aren't equivalent. COM and MZ are 16-bit executables for MS-DOS [3] and NE is for 16-bit Windows; all can be officially run without workarounds on 32-bit x86 Windows systems (NTVDM has admittedly spotty compatibility, but the point stands). Here, we're talking about 32-bit x86 code, so COM/MZ/NE doesn't apply (to my knowledge there have never been 16-bit Linux programs anyway).
That Windows ships 32-bit compatibility out of the box while Linux distributions don't install 32-bit compatibility layers by default is one thing, but those layers on Linux only really apply to programs that at best share the same vintage as the host system (and at worst only work on the same distribution). Again, try running Tux the Penguin: A Quest for Herring as-is on a modern system (be it a 32-bit or 64-bit installation, that part doesn't matter here). I'd gladly be proven wrong if it can be done without either a substantial rewrite and recompilation, or egregious amounts of thunking a 2000s-era Linux userspace onto a 2020s-era one (no, a VM doesn't count, it has to run on the host).
[1] https://boricj.net/atari-jaguar-sdk/2023/12/18/part-3.html
[2] https://boricj.net/atari-jaguar-sdk/2024/01/02/part-5.html
[3] I know about 32-bit DOS extenders, but it's complicated enough as-is without bringing those into the mix.
a.out isn’t even supported in new Linux kernels, so how is that a user land problem? And you then repeated my point about how it’s not a user land problem by describing how it works as an ELF. ;)
> These cases aren't equivalent. COM and MZ are 16-bit executables for MS-DOS [3], NE is for 16-bit Windows ; all can be officially run without workarounds on 32-bit x86 Windows systems (NTVDM has admittedly spotty compatibility, but the point stands). Here, we're talking about 32-bit x86 code, so COM/MZ/NE does not apply here (to my knowledge there never has been 16-bit Linux programs anyways).
You’re not listening to what I’m saying.
COM and a.out are equivalent because they’re raw formats. Even on 32bit NT systems, COM required emulation.
The problem is the file formats are more akin to raw machine code than they are a modern container format.
So yeah, one is 16bit and the other 32bit, but the problem you’re describing is the file format being unforgiving across CPU architectures without emulation; and in many cases, disregarding the user land entirely.
By your own admission, 32bit PEs and 32bit ELFs work perfectly fine on their respective Windows and Linux systems without any hacks.
The difference here is that Windows ships WOW64 as part of the base install, whereas mainstream Linux distributions don’t ship 32bit libraries as part of their base install. That doesn’t mean you need hacks for 32bit though. For example, on Arch it’s literally just a section in pacman.conf that you uncomment.
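For reference, the relevant section of pacman.conf looks like this (a config sketch; exact contents may vary by release):

```
# /etc/pacman.conf -- uncomment to enable the 32bit (multilib) repo,
# then run `pacman -Syu` and install lib32-* packages as needed:
[multilib]
Include = /etc/pacman.d/mirrorlist
```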
My point was, if you wanted to ship a Linux distribution that supported random ELF binaries then you could. And package managers like Nix prove this fact.
The reason it’s harder on Linux isn’t because it requires hacks. It’s because Linux has a completely different design for installing applications and thus backwards compatibility with random ELFs isn’t generally worth the effort.
Also, it’s really not fair to argue that a.out, a format defined in the 70s and long since deprecated across all unix-like systems, is proof that Linux isn’t backwards compatible. ELF has been the primary file format on Linux for nearly 30 years now, and a.out was only relatively recently fully removed from the kernel.
Whereas COM has been problematic on Windows for the entirety of NT, including Windows 2000 and XP.
Is that a bad thing if it means a seamless experience for users? Storage is cheap.
However to answer your question:
Storage hasn’t always been cheap, so it used to be a bad thing. These days, as you rightly said, it’s less of an issue.
But if you do want to focus on present day then it’s worth noting that these days FOSS does ship a lot of dependencies as part of their releases. Either via Docker containers, Nix packages, or static binaries (eg Go, Rust, etc). And they do this precisely because the storage cost is worth the convenience.
Relatedly, at a previous job we ran an absolutely ancient piece of software that was critical to our dev workflow. The machine had an issue of some sort, so someone imaged the hard drive, booted it as a VM and we resumed business as usual. Last I heard it was still running untouched, and unmaintained.
That's not actually true; there are no guarantees. Microsoft makes a best effort to ensure the majority of applications continue to work, but there are billions of applications, and they're not all going to work. Many applications don't even adhere to the Win32 API properly. Microsoft will sometimes, if the app is important enough, ensure even misbehaving applications work properly.
For example, recently I tried to run Emacs' Appimage and it has a glib issue
https://github.com/probonopd/Emacs.AppImage/issues/22#issuec...
Glib is a library from the GTK project which offers utility functionality related to the GTK widget toolkit, while glibc is the GNU C Library.
Amusingly, these kinds of beyond-the-core libraries are the ones that have always caused problems for me, never the actual core GNU C Library.
We're clearly not living in the same universe here.
glibc backward compatibility is horrible.
Every. Single. Time. I try to use an old binary on a modern distro, it bombs, usually with some incomprehensible error message with GLIBC in all caps in it.
And these days, you can't even link glibc statically, when you try it barks at you with vehemence.
As a matter of fact, as pointed out in the article, this particular shortcoming of glibc completely negates the work done by Linus to keep userland backward compatible at all cost.
Please post actual issues encountered, including non-paraphrased errors instead of FUD.
And if you want to statically link your libc, there is nothing forcing you to use glibc. You're only stuck with glibc (and even then you don't actually need to use any functions from it yourself) if you need dynamic linking for e.g. OpenGL/Vulkan. Also, glibc wasn't designed for static linking even before they put in safeguards against that.
1. GNU libc is an exception in the world of compatibility.
2. You can't just dump a bunch of GTK libraries next to the binary and expect it to work. These libraries often expect very specific file system layouts.
.NET 1.x tho, yeah.
(Yes, if you fiddle with the config file they might work on the .NET 4.0 runtime. But that’s not something a typical user can/will do.)