I am a huge fan of all the traditional, intra-frame-only anti-aliasing techniques built on supersampling. The performance cost begins to make sense when you realize that these techniques are essentially perfect for the general case: they actually increase the information content of each pixel within each frame. Many modern techniques instead rely on multiple consecutive frames to build the final result, which is tantamount to game dev quackery in my book.
SSAA is even better than MSAA and is effectively what you are using in any game where you can set the "render scale" to a figure above 100%. It doesn't necessarily need to come in big scary powers of two (it used to, which made enabling it a problem). Even small oversampling rates like 110-130% can make a meaningful difference in my experience. If you can afford to go to 200% scale, you get a 4x increase in information content per pixel (2x in each dimension).
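To make that 200% = 4x claim concrete, here's a toy sketch of the resolve step (my own illustration, not from any particular engine): render at twice the width and height, then box-filter every 2x2 block of the supersampled image into one output pixel.

    #include <stdint.h>
    #include <stddef.h>

    /* Toy SSAA resolve at 200% render scale: average each 2x2 block of the
     * supersampled image into one output pixel. Grayscale for brevity. */
    static void ssaa_resolve_2x(const uint8_t *hi, uint8_t *lo,
                                size_t w, size_t h)
    {
        /* hi is (2*w) x (2*h); lo is w x h */
        for (size_t y = 0; y < h; ++y) {
            for (size_t x = 0; x < w; ++x) {
                unsigned sum = 0;
                for (size_t sy = 0; sy < 2; ++sy)
                    for (size_t sx = 0; sx < 2; ++sx)
                        sum += hi[(2 * y + sy) * (2 * w) + (2 * x + sx)];
                lo[y * w + x] = (uint8_t)(sum / 4); /* 4 samples -> 1 pixel */
            }
        }
    }

A real resolve may use a fancier filter than a plain box, but the information accounting is the same: four rendered samples feed every displayed pixel.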
Sure, the textures themselves aren't too high fidelity, but since the pixel shader is so simple, it's quite feasible to do tricks which would have been impossible even ten years ago. I can run the game even with 8x SSAA (that means 8x8 = 64 samples per pixel) and near-ground-truth 128x anisotropic filtering.
There's practically zero temporal aliasing and zero spatial aliasing.[0] Now of course, some people don't like the lack of aliasing too much - probably conditioning from all the badly running, soulless releases - but I think this direction is the future of gaming. Less photorealism, more intentional graphics design and crisp rendering.
(edit: swapped the URL because imgur was compressing the image to shit)
If I render a linear gradient at increasingly higher resolutions, I'm obviously not creating infinite information in the continuum limit.
If we want to get very pedantic, the information gain per pixel can actually be far more dramatic than 4x under any supersampling strategy. Consider a pathological case: a very dark room where, at 100% render scale, not a single pixel picks up the edge of a prop in a corner. At a higher render scale you might start to get a handful of pixels that represent that feature. Even if you blend them poorly, you still get better than nothing. Some might argue that going from zero information to any information at all represents an infinite gain.
If only one pixel picks it up, the feature is likely overrepresented whenever it does show up, so that may actually be a loss rather than a gain relative to the baseline: perfectly supersampled, it would show up at, say, 0.1% intensity.
The all-black frames may be more accurate relative to that baseline than the ones that pick it up at much stronger intensity.
If the perfectly supersampled feature would show up at 50.1% intensity, then the frames that do pick it up may be the more accurate ones, and picking it up will also be the more common case.
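A toy way to see both sides of this argument (my own sketch, nothing from an actual renderer): treat the prop edge as a feature covering a fixed fraction of one pixel and estimate its intensity from N random sub-pixel samples. At 1 sample per pixel you get either 0% or 100%, both wrong in different ways; as N grows the estimate converges on the true coverage.

    #include <stdio.h>
    #include <stdlib.h>

    /* Estimate the shaded intensity of a pixel whose true coverage by a tiny
     * bright feature is `coverage`, using n random sub-pixel samples. */
    static double sampled_intensity(double coverage, int n)
    {
        int hits = 0;
        for (int i = 0; i < n; ++i)
            if ((double)rand() / RAND_MAX < coverage) /* sample hits the feature */
                ++hits;
        return (double)hits / n;
    }

    int main(void)
    {
        const double coverage = 0.001; /* feature fills 0.1% of the pixel */
        srand(42);
        for (int n = 1; n <= 64; n *= 2)
            printf("%2d spp -> estimated intensity %.4f (true %.4f)\n",
                   n, sampled_intensity(coverage, n), coverage);
        return 0;
    }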
[0] eg https://docs.nvidia.com/gameworks/content/gameworkslibrary/g...
The high triangle count of modern renders might in some cases push MSAA closer to SSAA in terms of cost and memory usage, all for a rather small sample count relative to a temporal method.
Temporal AA can handle everything and is relatively cheap, so it has replaced all the other approaches. I haven't used Unreal's TAA; does Unreal not support the various vendor AI-driven TAAs?
MSAA by default only handles aliasing at triangle edges; however, at least in OpenGL and Vulkan (I couldn't find anything relevant in D3D11 last time I checked, and D3D12 did have something that could be relevant, but I'm not sure), you can set the minimum fraction of samples that get their own shader invocation, so you also get some antialiasing in polygon interiors. Of course this is heavier (though still cheaper than SSAA), but IMO it produces a better image than TAA.
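For reference, these are the knobs I mean (the GL call is core since 4.0 via ARB_sample_shading; on the Vulkan side the sampleRateShading device feature has to be enabled):

    /* OpenGL 4.0+: run the fragment shader per sample so MSAA also smooths
     * shading aliasing inside triangles, not just at geometric edges. */
    glEnable(GL_SAMPLE_SHADING);
    glMinSampleShading(1.0f);   /* 1.0 = shade every sample; lower = a fraction */

    /* Vulkan: the equivalent pipeline state. */
    VkPipelineMultisampleStateCreateInfo ms = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO,
        .rasterizationSamples = VK_SAMPLE_COUNT_4_BIT,
        .sampleShadingEnable  = VK_TRUE,
        .minSampleShading     = 1.0f,
    };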
With the tradeoff of producing a blurry mess.
It works fine, but it needs more work. AFAICT (this is the first time I've seen it) the page you linked does implement MSAA with deferred rendering. Personally, I implemented MSAA with deferred rendering in an engine[0] I was writing ~12 years ago.
Nowadays in my current engine[1] I use a Forward+-ish approach, since I can just tell OpenGL "give me MSAA" and it just works :-P.
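By "give me MSAA" I mean roughly this much code (assuming a GLFW-created context; the window parameters are just an example, not what my engine actually does):

    /* Ask for a multisampled default framebuffer, then turn MSAA on. */
    glfwWindowHint(GLFW_SAMPLES, 8);          /* request an 8x MSAA backbuffer */
    GLFWwindow *win = glfwCreateWindow(1280, 720, "demo", NULL, NULL);
    glfwMakeContextCurrent(win);
    glEnable(GL_MULTISAMPLE);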
Yes, but is deferred still the go-to method? I think MSAA is a good reason to go with "forward+" methods.
It's very rare for games to be 100% forward nowadays, outside of ones specifically built for VR or mobile.
>there are some trade-offs in using the Deferred Renderer that might not be right for all VR experiences. Forward Rendering provides a faster baseline, with faster rendering passes, which may lead to better performance on VR platforms. Not only is Forward Rendering faster, it also provides better anti-aliasing options than the Deferred Renderer, which may lead to better visuals[0]
This page is fairly old now, so I don't know if this is still the case. I think many competitive FPS titles use forward.
>"forward+" methods.
Can you expound on this?
[0] https://dev.epicgames.com/documentation/en-us/unreal-engine/...
"forward+" term was used by paper introducing tile-based light culling in compute shader, compared to the classic way of just looping over every possible light in the scene.
I appreciate people standing up for classical stuff, but I don't want the pendulum swung too far back the other way either.
The output of a single pixel shader invocation is duplicated 2, 4 or 8 times and written to the MSAA surface through a triangle-edge coverage mask. Once rendering to the MSAA surface has finished, a 'resolve operation' downscale-filters the MSAA surface to the rendering resolution.
SSAA (super-sampling AA) is simply rendering to a higher-resolution image which is then downscaled to the display resolution. In other words, MSAA invokes the pixel shader once per 'pixel' (yielding multiple coverage-masked samples), while SSAA invokes it once per 'sample'.
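In OpenGL terms, the resolve is literally just a blit from the multisampled framebuffer to a single-sample one (msaa_fbo, resolve_fbo, w and h are placeholders here):

    /* Resolve: the driver averages the MSAA samples while blitting into the
     * single-sample framebuffer. Source and destination rectangles must match
     * for a multisample resolve; GL_NEAREST is the usual filter choice. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaa_fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolve_fbo);
    glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);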
Real-time graphics is all about tricks and fakery, and it's been that way since we've been racing the beam. Did people think that deferred rendering was a dirty trick when those Shrek devs figured it out and everyone stopped doing forward rendering?
As far as I know very few, if any, games currently take advantage of this feature. It's somewhat interesting that this is the case: when you think about it, many PCs and laptops do have 2 GPUs, one as a discrete graphics card and the other integrated with the CPU itself (and over the last few years these integrated GPUs have become powerful enough themselves to run some demanding games at low or medium settings at 60 fps).
Because before I clicked on the article (or the comments), that's the only sense I could make of the expression "d3d12" — rolling a d12, d3 times.
Competition for 3D APIs is more important than ever (e.g. Vulkan has already started 'borrowing' a couple of ideas from other APIs - for instance VK_KHR_dynamic_rendering (i.e. the 'render-pass-object removal') looks a lot like Metal's transient render pass system, just adapted to a non-OOP C API).
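For anyone who hasn't seen it, this is roughly what dynamic rendering looks like (core since Vulkan 1.3; cmd, color_view and extent are placeholder handles): no VkRenderPass or VkFramebuffer objects, you just describe the attachments at record time.

    /* Begin rendering without a pre-built render pass / framebuffer object. */
    VkRenderingAttachmentInfo color = {
        .sType       = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO,
        .imageView   = color_view,
        .imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .loadOp      = VK_ATTACHMENT_LOAD_OP_CLEAR,
        .storeOp     = VK_ATTACHMENT_STORE_OP_STORE,
        .clearValue  = { .color = { .float32 = { 0.0f, 0.0f, 0.0f, 1.0f } } },
    };
    VkRenderingInfo info = {
        .sType                = VK_STRUCTURE_TYPE_RENDERING_INFO,
        .renderArea           = { .offset = { 0, 0 }, .extent = extent },
        .layerCount           = 1,
        .colorAttachmentCount = 1,
        .pColorAttachments    = &color,
    };
    vkCmdBeginRendering(cmd, &info);
    /* ... draw calls ... */
    vkCmdEndRendering(cmd);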
But that's not even the point. Everyone could collaborate on the common API and make it better. Where were Apple and MS? Chasing their NIHs instead of doing that, so your argument is missing the point. It's not about how good it is (and it is good enough, though improving it is always welcome), it's about it being the only collaborative and shared effort. Everything else ends up simply irrelevant, even if it could be better in some ways.
Meanwhile, in the real world, it is Chrome that is setting the standards, and everybody is following it while holding up a fig leaf to maintain some semblance of dignity. Why? Because the W3C failed to make decent standards. Are CSS and JavaScript anyone's idea of good architecture?
The direction of the idea you are advocating for is completely wrong.
And the world would have been a better place. All we needed in OpenGL 5 was a thread-safe API with encapsulated state and more work on AZDO and DSA. Now, everyone has to be a driver developer. And new extensions are released one-by-one to make Vulkan just a little more like what OpenGL 5 should have been in the first place. Maybe in 10 more years we'll get there.
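To illustrate what DSA (Direct State Access, core since GL 4.5) buys you, here is a small sketch of existing API calls (the size/data variables are placeholders): no more bind-to-edit, objects are created and modified directly, which is exactly the kind of encapsulated state an "OpenGL 5" could have built on.

    /* Classic bind-to-edit style: editing the buffer mutates global binding
     * state as a side effect, which is what makes threading so painful. */
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);

    /* Direct State Access (GL 4.5): the object is addressed directly, no
     * binding required just to fill it. */
    GLuint buf2;
    glCreateBuffers(1, &buf2);
    glNamedBufferStorage(buf2, size, data, 0);

AZDO here being the "approaching zero driver overhead" family of techniques (persistent mapping, multi-draw indirect, bindless textures) built on the same ideas.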
Just look at the shading-language mess as well, with GLSL lagging behind, most devs using HLSL due to its market share, and now Slang, which again was contributed by NVidia.
Also, the day LunarG loses its Vulkan sponsorship, the SDK is gone.
Imagine an OpenGL which takes inspiration from D3D11 and dares to be even more user-friendly and intuitive. Instead, we got Vulkan, yay.
The problem is making gaming on GNU/Linux profitable; Vulkan will not fix that, and Proton is not a solution that will work out long term.
> Linux Beats Mac Dramatically In Humble Bundle Total Payments
https://web.archive.org/web/20150415180723/http://www.thepow...
> Linux users pay 3x that of Windows users for Humble Indie Bundle 3
https://web.archive.org/web/20111130182955/https://www.geek....
Old links, precisely because this happened before or soon after Steam came to Linux.
> In a 6–2 majority, the Court ruled that Google's use of the Java APIs was within the bounds of fair use, reversing the Federal Circuit Appeals Court ruling and remanding the case for further hearing.
https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_...
Because apparently Linux folks haven't learnt the OS/2 lesson.
The main problem with Vulkan is that Apple decided to go with its own Metal API, completely fracturing the graphics space.
Considering that DX12 made it out earlier, and it took some time for Vulkan to finally relax some of its rules enough to be relatively easy to use efficiently, I think it just lost momentum.