PNG is a highly structured file format internally. It borrows design ideas from formats like EA's Interchange File Format: it consists of a list of chunks, each with a fixed header encoding the chunk type and length. Decoders are expected to parse the headers and skip chunk types they do not support.
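The chunk layout is simple enough to walk by hand. A rough sketch in C (no CRC validation and barely any error handling, just to show the shape of it):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t read_be32(FILE *f) {
    unsigned char b[4];
    if (fread(b, 1, 4, f) != 4)
        return 0;
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    fseek(f, 8, SEEK_SET);              /* skip the 8-byte PNG signature */

    for (;;) {
        uint32_t length = read_be32(f); /* 4-byte big-endian data length */
        char type[5] = {0};
        if (fread(type, 1, 4, f) != 4)  /* 4-byte ASCII chunk type */
            break;

        printf("chunk %s, %u bytes\n", type, length);

        /* A decoder that doesn't recognize this chunk type can simply
         * skip the data plus the trailing 4-byte CRC. */
        fseek(f, (long)length + 4, SEEK_CUR);

        if (memcmp(type, "IEND", 4) == 0)
            break;                      /* IEND is always the last chunk */
    }

    fclose(f);
    return 0;
}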
There is also some leeway in how the encoding is done, as long as you end up with a valid stream of bits at the end (the bit-stream format), so encoders can improve over time. This is common in video formats. I don't know whether a lossless image format would benefit much from that.
In this case there could be an embedded reduced-color-space image next to an extended-color-space one.
Lossless AVIF is not competitive.
However, lossless WebP does not support indexed-color images. If you need palettes, you're stuck with PNG for now.
And buffer sizes aren't handled well. You have to provide pre-allocated memory and guess how big it needs to be; if you guess wrong, you get a "not big enough" error. That's a guessing game, not good design: you're forced to overshoot and then shrink the buffer afterwards.
---
In other APIs, there tends to be a way to ask for the required buffer size. For example, many Win32 functions let you call them with a buffer size of 0 and report the actual required size back. Another option is to have the library allocate the memory and return the buffer to you. Since cross-module memory management is hairy (different `malloc` implementations can't interoperate), some APIs instead let you supply the `malloc`, `realloc`, and `free` functions to use.
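A sketch of that first "ask for the size" pattern in C, with a made-up encode_thing() standing in for whatever encoder you're wrapping (here it just copies its input):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Contract: if dst is NULL, write the required size to *dst_len and
 * succeed; if dst is too small, write the required size and fail. */
static int encode_thing(const unsigned char *src, size_t src_len,
                        unsigned char *dst, size_t *dst_len)
{
    if (dst == NULL || *dst_len < src_len) {
        *dst_len = src_len;   /* report the size actually needed */
        return dst ? -1 : 0;  /* NULL query is not an error */
    }
    memcpy(dst, src, src_len);
    *dst_len = src_len;
    return 0;
}

int main(void)
{
    const unsigned char src[] = "some pixels";
    size_t needed = 0;

    /* First call: ask how big the output buffer must be. */
    if (encode_thing(src, sizeof src, NULL, &needed) != 0)
        return 1;

    unsigned char *buf = malloc(needed);
    if (!buf)
        return 1;

    /* Second call: encode into a buffer of exactly the right size. */
    if (encode_thing(src, sizeof src, buf, &needed) != 0) {
        free(buf);
        return 1;
    }

    printf("encoded %zu bytes, no guessing involved\n", needed);
    free(buf);
    return 0;
}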
I designed the article to be accessible and understandable for the average person. So I took some liberties like showing only HDR primaries and not deep diving into HDR transfer functions. People understand the primaries intuitively.
But you are right that a wide color image could also use those same primaries without being HDR.
My goal was to be as truthful as possible while still being digestible at a glance.
In the article, I linked to Chris Lilley's post which explains it more thoroughly for the technical people.
After 20 years of success, we can't resist the temptation to mess with what works.
For example, 16-bit (integer) TIFF files "with headroom", i.e. where some bits represent data above 1.0 (HDR), were a common approach for VFX work in the '90s.
16-bit float TIFF has also been a thing for 33 years. Adobe DNG is modeled after TIFF. High-end offline renderers have traditionally used TIFF (with mip-maps) to store textures.
TIFF supports tags, so primaries and white point, or a known color space name, can be stored in the file.
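For illustration, roughly what that looks like with libtiff (an untested sketch; the Rec.709/sRGB primaries and D65 white point are just example values, and error handling is omitted):

#include <stdint.h>
#include <string.h>
#include <tiffio.h>

int main(void)
{
    const uint32_t w = 64, h = 64;
    TIFF *tif = TIFFOpen("tagged.tif", "w");
    if (!tif) return 1;

    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, w);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, h);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 16);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
    TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, TIFFDefaultStripSize(tif, 0));

    /* Chromaticities are stored as xy pairs: R, G, B, then the white point. */
    float primaries[6]  = {0.640f, 0.330f, 0.300f, 0.600f, 0.150f, 0.060f};
    float whitepoint[2] = {0.3127f, 0.3290f};
    TIFFSetField(tif, TIFFTAG_PRIMARYCHROMATICITIES, primaries);
    TIFFSetField(tif, TIFFTAG_WHITEPOINT, whitepoint);

    /* Write a dummy black image just so the file is complete. */
    uint16_t row[64 * 3];
    memset(row, 0, sizeof row);
    for (uint32_t y = 0; y < h; y++)
        TIFFWriteScanline(tif, row, y, 0);

    TIFFClose(tif);
    return 0;
}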
The format is so versatile, it is used everywhere.
And of course it also supports indexed color, a non-negotiable feature at the time PNG was introduced.
PNG was meant to replace GIF. Instead of looking at what was already there, some group of "experts" and "enthusiasts" (to quote Wikipedia) succumbed to their NIH complexes. If licensing/patent woes over compression algorithms were the motivator, why not just add a new one to TIFF?
The fact that PNG stores straight/unpremultiplied alpha says everything if you know anything about imaging in computer graphics.
And the fact that the updated format spec just released didn't address this tells you everything you need to know about the group in charge of that, today.
PNG is the VHS of image formats. It should never have seen the light of day in the first place, nor the adoption it did.
Yeah, I love the fact that you can embed a PDF file inside a TIFF.
> And the fact that the updated format spec just released didn't address this tells you everything you need to know about the group in charge of that, today.
What does it say? That they are naive or have the wrong priorities? Their rationale for this seems quite reasonable to me: https://www.w3.org/TR/PNG-Rationale.html#R.Non-premultiplied...
E.g. take an associated (premultiplied) pixel with the 8-bit/channel RGBA value 255, 0, 0, 0 (glowing red).
Because PNG can only store unassociated data, a reader must re-associate before displaying, and that will give you 0, 0, 0 afterwards (black instead of additive red). See e.g. [1] for why this matters.
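To spell out the arithmetic with a toy 8-bit example (the variable names are made up, obviously):

#include <stdio.h>

int main(void)
{
    /* Associated source pixel: emits red, occludes nothing. */
    unsigned r = 255, g = 0, b = 0, a = 0;

    /* To store in a straight-alpha format you must unassociate
     * (divide by alpha). With a == 0 there is nothing sensible
     * to store; most writers emit 0 (or garbage). */
    unsigned r_straight = (a == 0) ? 0 : r * 255 / a;

    /* A reader then re-associates (multiplies by alpha) before
     * compositing "over": anything times 0 is 0. */
    unsigned r_back = r_straight * a / 255;

    printf("stored red channel: %u, red after re-associating: %u\n",
           r_straight, r_back);   /* 0 and 0: the emission is gone */
    (void)g; (void)b;
    return 0;
}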
Additionally, the PNG spec does not specify whether the alpha is linear. Some PNG readers/writers assume it is; some assume it has gone through (or should go through) an sRGB transfer curve instead. It mostly works, until it doesn't.
The fact that the spec doesn't say which one it should be is another telltale sign that it was written by people unaware of the subtleties of image processing.
I understand that unassociated alpha gives you more precision in 8-bit, and since people at the time (pre-SVG) wanted to e.g. store color ramps (with alpha) in PNG, and most image processing software (mainly Photoshop back then) would not dither gradients for 8-bit, this really mattered.
But it's 2025. And when 16-bit PNG got introduced, it should definitely have had associated (and explicitly linear) alpha.
[1] https://academysoftwarefdn.slack.com/archives/C05782U3806/p1...
Quote the relevant section; Slack requires a log-in.
> Additionally the PNG spec does not specify if the alpha is linear.
Section 12.1 of the PNG spec seems to specify exactly that: “gamma does not apply to alpha samples; alpha is always represented linearly.”
They do not mention precision at all in their rationale for that: “We standardized on non-premultiplied alpha as being the lossless and more general case.”
> And when 16bit PNG got introduced...
PNG has supported 16-bits per component since it was first introduced (see version 1.0 of the spec or RFC 2083).
How is this "more general"? Unpremultiplied is actually lossy (not so as far as precision goes but critically so if you talk about meaning/information).
> PNG has supported 16-bits per component since it was first introduced (see version 1.0 of the spec or RFC 2083).
True, my bad. But there have been many updates to the spec over the years.
I did, by the way, find a mention that alpha is assumed to be linear in [1], but only in a comment inside a sample code snippet.
Quote:
/*
* Compositing is necessary.
* Get floating-point alpha and its complement.
* Note: alpha is always linear; gamma does not
* affect it.
*/
On the note of alpha: [2] is a good piece to read to understand why this matters, specifically the section "PNG cannot store all clamped linear values…".
[1] https://www.w3.org/TR/PNG-Encoders.html#E.Alpha-channel-crea...
[2] https://www.realtimerendering.com/blog/png-srgb-cutoutdecal-...
Not sure how HDR encoding works, but my impression is that you can set a nominal white point other than (1, 1, 1) in your specified colorspace. This is an extension, but orthogonal to specifying the colorspace itself and the gamut.
But wide color gamut was already possible in PNG via ICC profiles (HDR was not). And those primaries I showed could have been used in a wide color image.
So the image is a bit misleading or red-flag-y to experts who know. But to the average person, I think it is as truthful as I can be without getting too deep in the weeds.
The continued popularity of non-HDR 1080p screens on laptops is a bleak reminder that most people would rather save a couple hundred bucks than buy HDR capable hardware.
HDR is great for TVs and a nice-to-have on phones (which mostly get it for free because OLEDs are the norm these days), but display technology only advances as much as its availability in low-cost devices.
It has, but the WWW is still de facto sRGB, and will be for a long time yet. But again, I'm not strictly opposed to evolving PNG; I just hope they don't ruin it in the process, because that's usually what happens when something gets updated for a modern audience. I'll be watching with mixed optimism and concern.
How can you call this basic fail a success?