Posted by ryandrake 12/16/2025

No Graphics API (www.sebastianaaltonen.com)
840 points | 183 comments (page 3)
fngjdflmdflg 7 days ago|
I think this almost has to be the future if most compute development goes to AI in the next decade or so, beyond the fact that the proposed API is much cleaner. Vendors will stop caring about maintaining complex fixed-function hardware and drivers for increasingly complex graphics APIs when they can get 3x the return from AI without losing any potential sales, especially now that compute seems to be more supply-limited. Game engines can (and I assume already do) benefit from general-purpose compute anyway for things like physics. And even where a task wouldn't be faster on its own, or would actually be slower, doing more on the GPU can win if your data is already on the GPU, which becomes more true the more work moves there. As the author says, it would also be great to have an open-source equivalent of CUDA's ecosystem that games could leverage in a cross-platform way.
vegabook 7 days ago||
Ironically, explaining that "we need a simpler API" takes a dense 69-page technical missive that would make the Khronos Vulkan tutorial blush.
Pannoniae 7 days ago||
It's actually not that low-level! It doesn't really get into hardware specifics that much (other than showing what's possible across different HW) or stuff like what's optimal where.

And it's quite a bit simpler than what we have in the "modern" GPU APIs atm.

mkoubaa 7 days ago||
I don't understand why you think this is ironic
zbendefy 7 days ago||
>The user writes the data to CPU mapped GPU memory first and then issues a copy command, which transforms the data to optimal compressed format.

Wouldn't this mean double GPU memory usage when uploading a potentially large image (even if only until the copy is finished)?

Vulkan lets the user copy from CPU (host_visible) memory to GPU (device_local) memory without an intermediate GPU buffer; AFAIK there is no double VRAM usage there, but I might be wrong on that.
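For reference, the two upload paths being compared might look like this in Vulkan terms (a hedged sketch, not the article's proposed API; handles like `stagingBuf`, `stagingMem`, and `deviceImg` are hypothetical and assumed already created):

```c
/* Path A: staging copy. The data is briefly resident twice: once in the
 * host-visible staging buffer, once in the device-local optimally-tiled
 * image. Note the staging copy lives in host-visible memory (system RAM
 * or the BAR window on discrete GPUs), so it is not necessarily a second
 * VRAM allocation. */
void *p;
vkMapMemory(device, stagingMem, 0, size, 0, &p);
memcpy(p, pixels, size);
vkUnmapMemory(device, stagingMem);
vkCmdCopyBufferToImage(cmd, stagingBuf, deviceImg,
                       VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);

/* Path B: write straight into a host-visible VK_IMAGE_TILING_LINEAR
 * image. No second allocation, but the GPU then samples from the slower
 * linear layout, which is why the staging path is the common default. */
```

On unified-memory hardware the distinction largely disappears, since host-visible and device-local can be the same heap.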

Great article btw. I hope something comes out of this!

bullen 7 days ago||
Personally I'm staying with OpenGL (ES) 3 for eternity.

VAO was the last feature I was missing.

Also the other cores will do useful gameplay work so one CPU core for the GPU is ok.

4 CPU cores is also enough for eternity. 1GB shared RAM/VRAM too.

Let's build something good on top of the hardware/OSes/APIs/languages we have now? 3588/linux/OpenGL/C+Java specifically!

Hardware has permanently peaked in many ways, only soft internal protocols can now evolve, I write mine inside TCP/HTTP.

theandrewbailey 7 days ago|
> Also the other cores will do useful gameplay work so one CPU core for the GPU is ok.

In the before times, upgrading your CPU meant everything ran faster. Who didn't like that? Today, we need code that scales to arbitrarily many CPU cores for that to remain true. 16-thread CPUs have been around for a long time; I'd like my software to make the most of them.

When we have 480+Hz monitors, we will probably need more than 1 CPU core for GPU rendering to make the most of them.

Uh oh https://www.amazon.com/ASUS-Swift-Gaming-Monitor-PG27AQDP/dp...

bullen 7 days ago||
I'm 60Hz for life.

Maybe 120Hz if they come in 4:3/5:4 with matte low res panel.

But that's enough for VR which needs 2x because two eyes.

So progress ends there.

16 cores can't share memory well.

Also, 15W is the peak because more is hard to passively cool in a small space. So 120Hz x 2 eyes at ~1080p is the limit of what we can do anyway... with $1/kWh!

The limits are physical.

thescriptkiddie 12/16/2025||
the article talks a lot about PSOs but never defines the term
flohofwoe 12/16/2025||
"Pipeline State Objects" (immutable state objects which define most of the rendering state needed for a draw/dispatch call). Tbf, it's a very common term in rendering since around 2015 when the modern 3D APIs showed up.
CrossVR 12/16/2025||
PSOs are Pipeline State Objects, they encapsulate the entire state of the rendering pipeline.
MaximilianEmel 7 days ago||
I wonder if Valve might put out their own graphics API for SteamOS.
m-schuetz 7 days ago|
Valve seems to be substantially responsible for the mess that is Vulkan. They were one of its pioneers from what I heard when chatting with Vulkan people.
jsheard 7 days ago|||
There's plenty of blame to go around, but if any one faction is responsible for the Vulkan mess it's the mobile GPU vendors and Khronos' willingness to compromise for their sake at every turn. A huge amount of API surface was dedicated to accommodating limitations that only existed on mobile architectures, and earlier versions of Vulkan insisted on doing things the mobile way even if you knew your software was only ever going to run on desktop.

Thankfully later versions have added escape hatches which bypass much of that unnecessary bureaucracy, but it was grim for a while, and all that early API cruft is still there to confuse newcomers.

torginus 3 days ago||
Which is very weird, considering mobile GPUs by their very nature use unified memory, so supporting things like bindless and GPU pointers (which in this case are just pointers) would be more straightforward than on PC, where you basically have two computers with separate memory spaces connected via PCI Express.
MindSpunk 2 days ago||
Bindless has nothing to do with UMA and everything to do with the fundamentals of how your GPU accesses memory. Older GPUs had limited register spaces where they could store texture and buffer references; the hardware had no instructions to read textures or buffers outside of the references in that small set of hardware registers. They simply weren't able to issue a request to the texture unit to read any old texture; it had to be in that set. And the GPU itself wasn't able to update those registers, only the CPU could.

UMA or not doesn't matter: desktop GPUs have MMUs and are perfectly capable of reading the CPU's memory in a unified address space (even back then).
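The contrast can be sketched in GLSL terms (illustrative only; the bindless form assumes Vulkan-style descriptor indexing via `GL_EXT_nonuniform_qualifier`, and the names are hypothetical):

```glsl
// Classic slot-based model: the shader can only sample from the small,
// fixed set of bindings the CPU wired up before the draw.
layout(set = 0, binding = 0) uniform sampler2D u_albedo;

// Bindless / descriptor-indexing model: the shader indexes a large
// runtime-sized array of textures with an arbitrary value it computed.
layout(set = 1, binding = 0) uniform sampler2D u_textures[];

vec4 shade(uint materialId, vec2 uv) {
    return texture(u_textures[nonuniformEXT(materialId)], uv);
}
```

In the first model the reachable textures are fixed at bind time; in the second, which index to read is decided inside the shader, which is exactly what the older register-limited hardware could not do.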

pjmlp 7 days ago|||
Samsung and Google also have their share, see who does most of Vulkanised talks.
awolven 7 days ago||
Is this going to materialize into a "thing"?
flohofwoe 6 days ago|
I see it more as a rallying call to the GPU vendors, Microsoft, Khronos, and Apple about what the next major versions of D3D, Vulkan, and Metal should look like.
greggman65 7 days ago||
This seems tangentially related?

https://github.com/google/toucan

xyzsparetimexyz 7 days ago||
This needs an index and introduction. It's also not super interesting to people in the industry? Like, yeah, it'd be nice if bindless textures were part of the API so you didn't need to create that global descriptor set. It'd be nice if you could just sample from pointers to textures, similar to how dereferencing buffer pointers works.
ginko 7 days ago|
I mean sure, this should be nice and easy.

But then game/engine devs want to use a vertex shader that produces a UV coordinate and a normal together with a pixel shader that reads only the UV coordinate (or neither, for shadow mapping), and they don't want to pay for the bandwidth of the unused vertex outputs (or the cost of computing them).

Or they want to be able to randomly enable any other pipeline stage like tessellation or geometry and the same shader should just work without any performance overhead.

Pannoniae 7 days ago|
A preprocessor step mostly solves this one. No one said that the shader source has to go into the GPU API 1:1.

Basically, do what most engines do: have preprocessor constants and use different paths based on what attributes you need.
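As a sketch of that variant approach (hypothetical names; the engine prepends defines such as `#define NEEDS_NORMAL` before handing the source to the shader compiler):

```glsl
#version 450
layout(location = 0) in vec3 a_position;
layout(location = 1) in vec2 a_uv;
layout(location = 2) in vec3 a_normal;

layout(location = 0) out vec2 v_uv;
#ifdef NEEDS_NORMAL
layout(location = 1) out vec3 v_normal;   // only exported when a variant asks for it
#endif

layout(set = 0, binding = 0) uniform Frame { mat4 mvp; mat3 normalMat; };

void main() {
    v_uv = a_uv;
#ifdef NEEDS_NORMAL
    v_normal = normalMat * a_normal;      // the shadow-map variant skips this work entirely
#endif
    gl_Position = mvp * vec4(a_position, 1.0);
}
```

Each combination of defines compiles to its own permutation, so the shadow-map variant never pays for the normal output.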

I also don't see how separated pipeline stages are against this - you already have this functionality in existing APIs where you can swap different stages individually. Some changes might need a fixup from the driver side, but nothing which can't be added in this proposed API's `gpuSetPipeline` implementation...
