Ranking my gaming experience across the setups I've used, roughly best to worst:
1. Steam on Linux via Proton + Wayland (Niri)
2. Steam on Linux via Proton + X11 (Xfce)
3. Steam on Windows
4. Games on Linux launched via other means (it's possible I was missing out on certain flags/optimizations, but this is just about the average experience)
The biggest thing I noticed when switching to Linux was an improvement in framerate consistency, i.e. I'd have fewer situations where the framerate would drop momentarily. Games felt more solid and predictable.
The biggest thing I noticed when switching from X11/Xfce to Wayland/Niri was just an overall increase in framerate. I'd failed this jump many times over the years, so it was notable when I jumped and stayed there earlier this year.
It does feel like games take longer to launch on average, but that makes sense given that they're launching via Proton/Wine.
With those admittedly limited examples though, I don't experience the same performance ranking, but I attribute that to my non-gaming hardware rather than any problem with Linux or Proton/Wine. I play on a laptop with an Nvidia 3050 laptop GPU, and I still get much better performance on Windows. In Cities: Skylines, for example, I'll get ~20fps on Linux via Proton (though I do experience what you said: it's consistent, with no major spikes or drops), while on Windows I get 45-60fps up until about 15k population or so.
Other games, despite working, remain unplayable to me due to performance. I can play Diablo 4 on Windows no problem on medium settings, but even on low it's just too unresponsive on Linux.
Anyway, just my anecdotal experience. Those with dedicated gaming rigs will be more than fine with Linux, but those of us on underpowered hardware still seem better off with Windows, unfortunately.
It's better value for money for both the gamers and the devs if the devs just choose to engage with Valve and get their game running perfectly under Proton.
A source port that is optimized as lovingly as its Windows counterpart will probably be faster than the Windows version running via Proton, but the incentives aren't generally there to justify the costs/difficulties. Maybe some day it will be! That would be wonderful.
But until then, Proton seems like an increasingly compelling option for these compatibility layer-based ports of Windows games.
Your latest AAA open world RPG on the other hand? Yeah, you're probably going to have better luck in Proton even if it gets a native Linux port.
Even better would be to compile for Linux but use DXVK-Native (https://github.com/doitsujin/dxvk#dxvk-native) if you think migrating from DirectX to Vulkan requires too much effort.
I think the reimplementations of Windows APIs on Linux, even though they're alternatives to the originals, should have similar bottlenecks and edge cases. So the extra QA on Windows helps the Proton version more.
On the other hand, Linux (or more accurately, the Linux desktop ecosystem) doesn't support a lot of high-end PC gaming features well: HDR, Nvidia GPUs, VR, etc.
> HDR
Already supported
> Nvidia GPUs
You have it the wrong way around. NVIDIA had issues supporting Linux, not Linux supporting NVIDIA. AMD drivers work fine, so it's not a Linux-specific issue.
> VR
SteamVR works though?
Is it though? I confess I haven't tried in a few weeks, but as of the last time I did, to get HDR in games you had to start a session with `gamescope` rather than a DE, and still had to set a bunch of flags - and in some ways have a very subpar experience, with problems with mouse movements and other issues I can't recall.
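The incantation was something along these lines, if memory serves (treat the exact flags and resolution as assumptions, not a recipe):

    # HDR via gamescope as a Steam launch option; DXVK_HDR asks DXVK to expose HDR
    DXVK_HDR=1 gamescope --hdr-enabled -W 3840 -H 2160 -- %command%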
I exclusively game on Linux and I find the experience far superior to doing anything on the other OS, but last I checked HDR was not actually supported.
If you want a laptop with good battery life, Intel is generally the way to go.
A lot of this is due to the enormous amount of effort Valve put into improving the open source AMD drivers, which are what the Steam Deck uses.
Of course if you want CUDA you need Nvidia, but if you use Nvidia to drive your Linux desktop expect some suffering to go along with it.
I'm running NixOS with a pretty vanilla configuration and it has been hassle-free.
I did have to disable power management at the system level, because framerate suffers severely if the system sleeps and wakes back up, but I shut the system down when I'm not using it, so this was a non-factor for me.
Not at a level where the experience is more fun than frustration.
It's not lacking features for me, it's lacking polish.
The main missing feature is kernel-level anticheat, which I personally don't care about.
I think Windows isn't that different, it's just that there's more motivation for NVIDIA or Microsoft to fix those things. Not that long ago, the combination of Windows 11, my NVIDIA RTX 40xx, and my previous Dell Alienware monitor had issues switching between SDR and HDR (and Dolby Vision later brought even more of a mess).
Meanwhile Android and iOS phones have been able to do it flawlessly for a while now…
All the ML people are using NVIDIA GPUs on Linux.
i3 should be a pretty easy switch from sway if you haven't tried it.
Hasn't Nvidia locked most functionality away from the open source drivers?
You either have the choice of using drivers that work well with Linux, or drivers that are fully featured.
In that case it might not be something the game devs or Steam can do anything about, but something you'd have to fiddle with on your system.
That's interesting and good to know. I'm running a 10th-gen i9 with an RTX 3090, so I have plenty of headroom performance-wise. I've been wondering about Linux gaming on lower-end hardware for my younger brother's sake, and hadn't assumed it would be worse.
One thing to note: I’ve had all kinds of issues with power management impacting performance. If I let the computer sleep/standby, I’ll get 50% slower framerate until I reboot.
Given that you're on a laptop, I wonder if power management has contributed to the slowness.
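If you want to rule that out, the blunt approach on a systemd distro is to forbid sleeping entirely (a sketch; undo it later with `systemctl unmask`):

    # prevents all suspend/hibernate paths at the system level
    sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target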
My guess is that Nvidia's Linux video drivers are still substandard.
For your system, the integrated graphics should also be quite capable. More so on Linux, thanks to the driver advantage AMD has here.
1: the hardest part was finding a bar that supported i3status-rs; not a fan of GTK bars that eat up CPU. I settled on i3bar-river.
I wish more Wayland compositors took this approach; it seems like a cleaner method of keeping X compatibility without allowing Xwayland to bring down the entire compositor.
The scrollable aspect just feels so natural and intuitive to me.
If you use ZFS (single NVMe) then you can beat Windows load times by a fairly large margin. My husband and I have identical hardware for our gaming computers (he uses Windows and I run Linux), and it's not uncommon for my computer to load games 10 seconds faster than his.
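Nothing exotic on the tuning side either; the usual sketch for a games dataset looks something like this (the pool/dataset names are made up, the properties are standard ZFS):

    # big records suit bulky game assets; lz4 is nearly free; atime off skips extra writes
    zfs create -o recordsize=1M -o compression=lz4 -o atime=off tank/games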
I stay away from XFS; every time I have used it in the past, my entire drive has ended up scrambled within a few months. It's by far the worst file system I have ever used; not even FAT32 was that unstable for me.
Was it with any specific game? I just tried the GOG version of The Witcher 3 "Complete Edition" (the remastered one) with the Direct3D 12 renderer under both Xorg/Window Maker and Wayland/KDE using umu-run (essentially Proton without Steam), and it had identical performance in both cases, in either low or high settings. (I also tried to use Niri, but it would launch in 60Hz mode and for some reason wouldn't allow the game to run at a higher framerate with vsync disabled, regardless of any option I chose.) That's basically what I expected, since the window system shouldn't be a bottleneck unless something is either broken or you are running at something like 20000fps :-P
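For anyone unfamiliar, the umu-run invocation is roughly this (a sketch; GAMEID/WINEPREFIX/PROTONPATH are umu's environment variables, the paths here are made up):

    # PROTONPATH=GE-Proton tells umu to fetch/use the latest GE-Proton build
    GAMEID=0 WINEPREFIX=~/Games/witcher3 PROTONPATH=GE-Proton umu-run ./witcher3.exe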
> (I also tried to use Niri, but it would launch in 60Hz mode and for some reason wouldn't allow the game to run at a higher framerate with vsync disabled, regardless of any option I chose.)
I had some issues early on related to refresh rate, and it turned out I didn't have an output defined for the correct display. The steps I took:
1. Run `niri msg outputs` to identify the Display ID and available modes. In my case: "DP-3" and "2560x1440@143.964"
2. Set up an output in niri's config.kdl as follows:
output "DP-3" {
mode "2560x1440@143.964"
variable-refresh-rate
}
At "ultra" settings i got around 115fps which is ~5 fps lower than 117-120fps i got from Xorg/Window Maker and KDE/Wayland, though i'm pretty sure that was just the forced vsync, so in practice it seems that the window system doesn't matter much.
Did you use XFCE's desktop compositor? AFAIK XFCE's compositor isn't particularly great. Some years ago, when I was working on a custom game engine, I had to add an explicit option to use the X11 "override redirect" flag instead of the window hint for fullscreen windows, because XFCE's compositor wouldn't disable itself otherwise and the game would feel a bit laggy/inconsistent. Not sure if this has been fixed nowadays, but in general it gave me a bad impression of XFCE's compositor, as other compositors didn't seem to have the same issue.
Also anecdotal, but I feel like Steam games on Linux compile shaders on the CPU, and maybe not in a super optimized way, compared to Windows, where they either ship with precompiled shaders or compile them on the GPU?
Still, the very same games run better for me on Linux+Wayland than on Windows, with way fewer stutters in particular, as you mention.
But I'm not sure if that's because of OS differences, or because it's so much easier to end up bloating a Windows install. I can't say I treat them the same, as one is mostly a work environment and the other is purely for entertainment and creative usage.
Once we can play AAA games I am literally ditching Windows forever. SteamOS is the best thing that has happened to gaming.
We already have the technology now to do it better. A combination of only sending what info a client should have, and server-side checks. As soon as something like UT ships with that built in we can hopefully forget about this horrible hack we currently have to check for cheats.
The goal of anti-cheat isn't to stop the world's most advanced cheaters. Those are already unstoppable because they now use Direct Memory Access over the PCI-E bus, so the cheats don't even run on the same computer anymore. However, since those cheaters are few and far between, they can be handled through player reports.
The goal is to stop the mediocre cheater who simply downloaded a known cheat from a cheating forum. If you don't stop those you'll get such a large wave of cheaters that you can't keep up with banning them quickly enough.
As far as I see the only way around not sharing anything that's outside of the immediate perception of a player is to have the audio and graphics be entirely rendered server-side.
I imagine that most game devs just look at the incredible amount of work this takes to implement and complexity it adds, and decide to not bother. Valorant can do it because the game itself is low complexity, the developer has deep pockets, and also the added competitive integrity is valuable.
It's infeasible for the server to keep track of each player and do frustum and raycasting to every other player to check who can see who every frame.
Culling out-of-view entities also has a problematic side effect: when a player spins around, you now have to stream in several big chunks of world state in the few milliseconds before they click to get that 180 no-scope.
Working on mostly server platforms, I had forgotten that IOMMU enablement (and, where relevant, enforcement) was not the default.
Consumer hardware and software is terrifying.
This is enforced by a greatly enriched TPM (and its willingness to unwrap credentials). We have to trust several layers of firmware and OS software, but the same mechanism allows us to ensure that known-bad versions of those aren't part of the stack that booted.
If I wanted secure games (and the market would tolerate it), I'd push for enforcement of something similar in the consumer space.
The only thing you're getting by saying "no IOMMU" is "I want any devices in my machine to be able to do anything, not just what I want them restricted to".
Hooray, freedom!
I mean that the presence or absence of an IOMMU doesn't impact whether owners of hardware have control over their hardware.
It just means that the owner of the machine is able to limit what memory the devices in their system can access, in the same way that MMUs limit what memory every process on your system can access.
Do you have any good resources with keeping up with this kind of thing? Seems like a fun topic to learn about
For example: in competitive shooters (where cheaters are most prevalent) you can't have things appearing out of thin air. The client needs to know about things ahead of time to play sounds and to give other environmental hints.
The only way to be really fair is for everybody to stream the game at the same resolution, frame rate, and latency.
If you are asking why games like Counter-Strike don't have limits on online play, that's mostly a commercial question. Would those games be as popular if they limited performance to what was achievable on minimum specs? I certainly wouldn't want to play at 1920x1080 on my nice widescreen monitor, but setting the minimum to a $1500 monitor and the hardware to drive it would guarantee very few players.
Some games do impose limits though, for example Overwatch doesn't allow you to use an aspect ratio larger than 16:9 and selecting a wider aspect ratio actually cuts down on your vertical field-of-view rather than granting you more horizontal field-of-view. This lessens the potential advantage of ultra-wide monitors.
That's still gonna be annoying for players, but it'll greatly decrease incidence, and if reporting a player for botting requires buying and hacking a new controller... It should be quite effective.
Let’s just say that my finals experience isn’t the same as yours! ;)
I think that traditional kernel-level anticheat is going away, but the reason is more that, after CrowdStrike caused the mass outage, Microsoft stated that they want to provide standard interfaces for security sensors and forbid kernel-level access otherwise (and anticheat can be considered a kind of security sensor too).
If these interfaces become standardized then Valve/Linux could in principle implement them too.
Any anti-malware software ultimately ends up being a cat-and-mouse game, but that doesn't mean we stop shipping signature updates.
Once you get to matchmaking, global ranks, etc., it just gets too sweaty and ruined by cheating/low trust/etc.
You can see this in current games with community servers: GTA V's modded FiveM and CS2's FACEIT include more anti-cheat, not less.
I want good balanced matches with players of my similar skill level via matchmaking.
They say they don't support Linux because it's too complicated to be worth the ROI. Really, it's that they don't want to boost a platform where Steam is far and away the default store.
ETA: EAC still supports Linux gaming today, but the rumors remain that Epic could remove that at their whim.
Just avoid it.
¹: Rainbow Six: Siege and Apex Legends, respectively.
The two most popular ACs by far are Easy Anti-Cheat and BattlEye, which have natively supported Linux for years, but it is entirely up to the game devs to enable it.
According to areweanticheatyet.com, about 40% of all games with AC are working.
Then, there are things like head tracking which are either another dedicated peripheral which may or may not get drivers, or a set of apps which feed from a webcam and output the signal to a standard driver that games know to check for.
Finally, most 3rd party add-ons have custom installers, and I'm guessing most of them won't have a working Linux version. So, while I'm sure it's possible to run, say, a vanilla X-Plane on a non-Windows installation with no peripherals/apps/add-ons, I just see a mountain of work to get a normal, heavily custom installation working.
I know that this isn't an easy solution/doesn't go against your argument, because it isn't download-and-run simple, but Discord's version can be modified with no consequences in its build_info.json file. I used to do it manually, back when they updated it every once in a while, but due to their current tendency to push updates every few days or so, I've made a few-line bash script to fetch the latest version (thank you httptap) and patch the file for me. For screen sharing, I use whatever current Discord client on GitHub supports it for Wayland, which usually has the added benefit of not limiting quality and framerate options.
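The script is essentially this (a sketch; the update endpoint and the deb install path are the ones I've seen such scripts use, so verify both on your install):

    #!/usr/bin/env bash
    # fetch the version Discord's updater expects, then patch build_info.json to match
    BUILD_INFO=/usr/share/discord/resources/build_info.json
    LATEST=$(curl -s 'https://discord.com/api/updates/stable?platform=linux' | jq -r '.name')
    TMP=$(mktemp)
    jq --arg v "$LATEST" '.version = $v' "$BUILD_INFO" > "$TMP" && sudo mv "$TMP" "$BUILD_INFO"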
But yes, you do have a point, it’s not just ‘as simple’ as it is under Windows - when Windows works properly.
It Just Works.
Think about it from the POV of a Windows user, especially one who has never used Linux before, and especially one who doesn't know what HN is. To install a program, the first thing you're going to do is type "discord" into your browser, and go to their website. Discord's website doesn't suggest that there's a better option. It just gives you a DEB file.
The package system is very important to learn in Linux. People have 12 ways to install an app, and they are far from equal.
This missing piece raises a fun question: whatever happened to VAC, and why hasn't it kept up with the times?
It seems like Linux would be a good excuse to reinvest in VAC and make it a bigger competitor to the current favorites like Easy Anti-Cheat (EAC).
Not a big issue if you're just using kb/mouse/controller but you can get into the weeds with VR, flight sticks, wheels, etc.
Does Valve run a SteamOS CI/CD farm? I could see a Rust-based template and library for calling into this set of APIs: you could upload your well-structured project and it would build and test for all platforms. Rust would just be the skeleton; your game logic could be in anything Rust can link to.
Test your game to make sure it works on the Steam Deck and avoid features that don't work on Proton, but you still have to primarily target Windows.
Yes, call X is faster than call Y in Proton, but that could also be because it's only 50% written and skips a bunch of side effects that will be added after you have written your game.
Therefore you need to view Proton as a potential moving target. Not that Windows isn't one too, but it's just not as clear-cut as is being claimed.
I know this because I pin some games against older versions of Proton because they work better/faster.
The Windows API perf topology is astronomically large.
That Proton is faster than Windows can't be universally true, but if you stick to that subset that has good conformance with Windows APIs and is also faster, that should be the target (and everything is moving of course).
Cross-benchmarking games between Windows and Proton, taking API traces, and finding those subsets of APIs and usage styles will enable a game developer to target the Proton APIs that are both conformant and fast. It is that subset of Windows APIs (temporal, spatial, structural usage) that I am suggesting as the target. The API linting tool might itself compare API traces. In fact, from the API traces of performant AAA games, I could see generating a header file that exposes only the API surface used by a given game.
The ABI lives below Steam.
I never came up with a good explanation for that.
Disk cache, to be precise.
I'm not sure how true that is, because in the Windows XP days most of us wouldn't have had enough RAM to spare to do that.
Depending on the VM technology they use, they offer a variety of caching modes and configurations, but the three basic approaches that most everybody offers are going to be something like:
"writeback" means that when the guest's storage data ends up in the host's cache it is reported back to the guest as 'written'. This means that from the guest's perspective the disk is written to, but in actuality the data is still floating around in memory. If the guest wants data to be 'safe' they need to issue additional flush commands.
"writethrough" means that the host is using its memory for caching file system, but that writes are reported as 'competed' only when they have been committed to actual disk.
and "none" means the cache is used as little as possible.
So if your guest's virtual disk is in 'writeback' mode, it isn't actually writing to the real disk; it is writing to memory. That is going to be very fast, up to the point where the cache on the host is exhausted.
Certainly Windows could lie to applications and keep information in memory much longer than it normally would instead of writing it to disk, but that would defeat some of the assurances that file systems are supposed to offer to applications.
'writeback' would be closer to what Windows already implements at the OS level, but because Linux is just plain faster, it should improve performance somewhat. Microsoft can only work on improving the performance of Windows to get the same results, and Linux is pretty hard to beat.
'none' is what I use when running Linux on Linux because having two layers of cache is just kinda wasteful and doesn't result in real improved performance.
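In QEMU terms these are just the cache= values on the drive definition (a minimal sketch; the disk images are made up):

    # writeback: host RAM buffers writes; none: bypass the host page cache entirely
    qemu-system-x86_64 -drive file=win.qcow2,if=virtio,cache=writeback
    qemu-system-x86_64 -drive file=linux.qcow2,if=virtio,cache=none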
Pretty much. Just speculating, because I don't know how your system was configured back in the day.
But all of what I said applies to most VM solutions.
If you are dealing with enterprise-grade hardware it isn't a bad thing. Keep in mind that OSes typically assume they are the only thing operating on the storage. So if you have, like, 30 Windows boxes all trying to write to a shared disk at the same time, it can lead to some bad behavior if you are not using a write cache.
The system has battery backup, and typically they expect you to use a shared SAN or NAS with multipath and/or bonded network interfaces for redundancy, as well as having backups. So the chances of data loss are a lot lower than on typical consumer hardware, and it gives the hosting OS a better chance at optimizing and scheduling writes properly.
Also remember the guest OS still has the option to send a 'flush' command to disk, which would ensure the writes complete regardless.
> It's been many years, but I don't imagine I would have chosen an option like that. I am very surprised VMWare would default to such behavior.
This is pretty normal. You'll see the same sort of options with hardware RAID devices and such things with their own internal battery-backed cache.
Not out of the box - games require mild tweaking, but nothing wildly challenging: add a parameter to the launch command line, etc. The ProtonDB entries & the comments on there usually explain what tweaks a game needs.
Don't think I'll switch back
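For a flavor of those tweaks, they're usually one-liners pasted into a game's Steam launch options (these particular variables are just common examples, not universal fixes):

    # Steam substitutes %command% with the game's real command line
    PROTON_ENABLE_NVAPI=1 %command%
    gamemoderun %command%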
But even then, assuming that is true, if they're pretty much the same, would people care that maybe some fog looks a little different when they get an extra 15-20fps in a game? I think a lot of people would still prefer the boost in frames.
Proton supplies a DLL that implements the Win32 API using Linux syscalls. Windows supplies a DLL that implements that Win32 API using Windows syscalls that you're not really supposed to use directly.
So 'translation layer' is not that unfair.
Philosophically it's still a translation layer though. It doesn't really care about correctness if no apps depend on it; success is meaningfully running client software. The implementation of the Windows libraries is just a way to get there.
For so long it was one of those "and now you have two problems" technologies and now it looks like it's the slow blade that could actually kill Windows.
I'm a Mac guy now, mainly because of my job, and I like UNIX-y stuff, but of course gaming there is even more lacking than on Linux.
We're so close. Once AAA releases and GPU drivers get there, it's over the cliff, and I could see that being in the next five years.
Check whatever you want to play at https://www.protondb.com/ - chances are, if it isn't intentionally borked with anticheat, it runs just fine. Looking at the top 300 games by Steam player count, 17 don't work, and probably 5 of those are utilities (like Crosshair X and Lossless Scaling).
> Because the NT OS/2 I/O system is asynchronous by nature, the ability to make a request and then have it completed at a later time makes it natural for implementing oplocks. Further, because synchronization is required by the file system to determine when the caller has completed its oplock update transfers, the file system can use this feature to block open requests to a file by queueing the I/O Request Packet (IRP) to its internal file control structure until the oplock owner lets it know that it is finished.
Use Dev Drive on a virtual disk or secondary volume; there are significant performance gains for things like git, nodejs modules, etc.
It helps to know the system. The perf would be an issue with any file system on Windows due to the file system filter architecture.
Locking is a function of the NT executive and not of the file system. It was a design decision. I’ll see if I can dig up the reasoning later.
E.g. the difference between the Lenovo and Asus Win11 drivers is sometimes bigger than the difference of the faster Windows driver to Linux.
It's also not all that surprising though; there are a lot of very smart people working on Proton, while the general quality level in the Windows ecosystem is slowly but steadily declining.
I also wouldn't be all that surprised if running a D3D11 or D3D12 game on a Proton-layer on Windows would be faster than running that same game without Proton. Sometimes Proton might have workarounds for 'API abuse' problems of specific games which the native D3D implementation or driver doesn't have.