This worries me. Because presumably, changing the compression algorithm will break backwards compatibility, which means we'll start to see "png" files that aren't actually png files.
It'll be like USB-C but for images.
[1] https://github.com/w3c/png/issues/39#issuecomment-2674690324
https://svgees.us/blog/img/revoy-cICP-bt.2020.png uses the new colour space. If your software and monitor can handle it, you see better colour than I do; otherwise, you see what I see.
Now the PNG datatype for AmigaOS will need upgrading.
It could be horrible in principle, but actually isn't.
The PNG format is specifically designed to allow software to read the parts they can understand and to leave the parts they cannot. Having an extensible format and electing never to extend it seems pointless.
This proves the OP's analogy regarding USB-C. Having PNG as some generic container for lossless bitmap compression means fragmentation in libraries, hardware support, etc. The reason is that once the container starts to support too many formats, implementations will start restricting themselves to the subsets the implementers care about.
For instance, almost nobody fully implements MPEG-4 Part 3; the standard includes dozens of distinct codecs. Most software only targets a few profiles of AAC (specifically, the LC and HE profiles), and MPEG-1 Layer 3 audio. Next to no software bothers with e.g. ALS, TwinVQ, or anything else in the specification. Even libavcodec, if I recall correctly, does not implement encoders for MPEG-4 Part 3 formats like TwinVQ. GP's fear is exactly this -- that PNG ends up as a standard too large to fully implement and people have to manually check which subsets are implemented (or used at all).
And now think of the younger generation that has grown up with smartphones and has been trained to not even know what a file is. I remember a story about senior high school students failing their exams during covid because the school software didn't support HEIF files, and they were changing the file extension to jpg in an attempt to convert them.
I have no trust that the software ecosystem will adapt. For instance, the standard libraries of the .NET framework have been fossilised, multimedia-wise, since around 2008. I don't believe HEIF is even supported to this day. So that's a whole bunch of code which, unless the developers create workarounds, will never support a newer PNG format.
But that's typical for file extensions. Consider EXE – it is probably an executable, but an executable for what? Most commonly Windows – but which Windows version will this EXE run on? Maybe this EXE only works on Windows 11, and you are still running Windows 10. Or maybe you are running x86-64 Windows, but this EXE is actually for ARM or MIPS or Alpha. Or maybe it is for some other platform which uses that extension for executable files – such as DOS, OS/2, 16-bit Windows, Windows CE, OpenVMS, TOPS-10, TOPS-20, RSX-11...
.html, .js, .css – suggest to use a web browser, but don't tell you whether they'll work with any particular one. Maybe they use the latest features but you use an old web browser which doesn't support them. Maybe they require deprecated proprietary extensions and so only work on some really old browser. Maybe this HTML page only works on Internet Explorer. Maybe instead of UTF-8 it is in some obscure legacy character set which your browser doesn't support.
.zip – supports extensible compression and encryption methods; your unzip utility might not support the methods used to compress/encrypt this particular zip file. This is actually normal for very old ZIP files (from the 1980s) – early versions of PKZIP used various deprecated compression mechanisms, which few contemporary unzip utilities support. The format was extended to 64-bit without changing the extension, and there are still a lot of 32-bit-only implementations out there. ZIP also supports platform-specific file attributes – e.g. PKZIP for z/OS creates ZIP files which contain metadata about mainframe data storage formats; unzip on another platform is going to have no idea what it means, but the metadata is actually essential to interpreting the data correctly (e.g. if RECFM=V you need to parse the RDWs; if RECFM=F there won't be any).
.xml - okay, it is XML – but that tells you nothing about the actual schema. Maybe you were expecting this xml file to contain historical stock prices, but instead it is DocBook XML containing product documentation, and your market data viewer app chokes on it. Or maybe it really is historical stock prices, but you are using an old version of the app which doesn't support the new schema, so you can't view it. Or maybe someone generated it on a mainframe, but due to a misconfiguration the file came out in EBCDIC instead of ASCII, and your app doesn't know how to read EBCDIC, yet the mainframe version of the same app reads it fine...
.doc - people assume it is legacy (pre-XML) Microsoft Word, every version of which changed the file format: old versions can't read files created with newer versions correctly or at all; conversely, recent versions have dropped support for files created in older versions, e.g. current Office versions can't read DOC files created with Word for DOS any more... but back in the 1980s a lot of people used that extension for plain text files containing documentation. And it was also used by incompatible proprietary word processors (e.g. IBM DisplayWrite) and desktop publishing packages (e.g. FrameMaker, Interleaf).
.xmi – I've seen this extension used for both XML Metadata Interchange (an XML-based standard for exchanging UML diagrams) and XMIT (IBM mainframe file archive format). Because extensions aren't guaranteed to be unique, many incompatible file formats share the same extension.
.com - is it an MS-DOS program, or is it DCL (Digital Command Language)?
.pic - probably some obscure image format, but there are dozens of possibilities
.img – could be either a disk image or a visual image, either way dozens of incompatible formats which use that extension
.db – nowadays most likely SQLite, but a number of completely incompatible database engines have also used this extension. And even if it is SQLite, maybe your version of SQLite is too old to read this file because it uses some features only found in newer versions. And even if SQLite can read it, maybe it has the wrong schema for your app, or maybe a newer version of the same schema which your old version of the app doesn't support, or an old version of the schema which the current version of the app has dropped support for...
Has anyone ever used .exe for anything other than Windows?
Under Windows 95/98/Me, most command line tools were MS-DOS executables. Their support for 32-bit Windows console apps was very poor, to the extent that the input and output of such apps was proxied through a 16-bit MS-DOS executable, conagent.exe
First time in my life I ever used GNU Emacs, it was an OS/2 exe. That's also true for bash, ls, cat, gcc, man, less, etc... EMX was my gateway drug to Slackware
Did you know that Microsoft Windows originally ran on top of the much older MS-DOS, which used EXE files as one of its two executable formats? Most Windows users had lots and lots of EXE files which were not Windows executables, but instead DOS executables. And then came Windows 95, which introduced 32-bit Windows executables, but kept the same file extension as 16-bit Windows executables and 16-bit DOS executables.
The same is also true for the most advanced codecs. The MPEG-* family and MP3 come to mind.
Nothing stops PNG from defining a "set of decoders" and letting implementers loose on that spec to develop encoders which generate valid files. Then developers can go to town with their creativity.
Proprietary or open, any visual codec is a battleground. Even in commercial settings, I vaguely remember people saying they prefer the end result of one encoder over another, for the same video/image format, not unlike how photographers judge cameras by their colors.
So maybe this flexibility in PNG will enable or encourage people to write better, or at least unorthodox, encoders whose output can still be decoded by standard-compliant decoders.
Regarding the potential fragmentation of the PNG ecosystem: the alternative is a new file format, which has all the same support issues. Every time you author something, you make a choice between legacy support and using new features.
From a developer perspective, adding support for a new compression type is likely to be much easier than implementing logic for an entirely new format. It's also less surface area for bugs. In terms of libraries, support added to a dependency propagates to all consumers with zero additional effort. Meanwhile adding a new library for a new format is linear effort with respect to the number of programs.
Not sure what you're talking about.
If you want to check yours: mediainfo **/*.mp4 | grep -A 2 '^Audio' | grep Format | sort | uniq -c
https://en.wikipedia.org/wiki/TwinVQ#TwinVQ_in_MPEG-4 tells the story of TwinVQ in MPEG-4.
Yeah, we know. That's terrible.
If you've created an extensible file format, but you never need to extend it, you've done everything right, I'd say.
That's what I would call really extensible, but then there may be no limits, and hacking/viruses could easily have a field day.
It will sooner or later be used to implement RCEs. Even if you could restrict it as is done for eBPF, that code still has to execute.
Best would be not to extend it.
And considering we already have plenty of more advanced competing lossless formats, I really don't see why "feed a BMP to deflate" needs a new, incompatible spin in 2025.
More generally, PNG has a simple feature to specify what's needed. A file consists of a number of chunks, and one bit in the chunk specifies whether that chunk is required for display. All of the extensions I've seen in the past decades set that bit to "optional".
For example, this update includes a chunk containing EXIF data. As you'd expect, the exif chunk sets that bit to "optional".
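The convention is visible in the chunk type itself: the bit in question is bit 5 of the first byte of the chunk name, so ancillary chunks like eXIf start with a lowercase letter while critical ones like IHDR and IDAT are uppercase. A minimal sketch of checking it (Python, standard library only; pass it any PNG you want to inspect):

    import struct

    def list_chunks(path):
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG"
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, ctype = struct.unpack(">I4s", header)
                f.seek(length + 4, 1)  # skip chunk data and CRC
                optional = bool(ctype[0] & 0x20)  # bit 5 set = ancillary
                print(ctype.decode("latin-1"),
                      "optional" if optional else "required for display")
                if ctype == b"IEND":
                    break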
Other than JXL which still has somewhat spotty support in older software? TIFF comes to mind but AFAIK its size tends to be worse than PNG. Edit: Oh right OpenEXR as well. How widespread is support for that in common end user image viewer software though?
In an ideal world, yes. In practice, however, if some field doesn't change often, then software will start to assume that it never changes, and break when it does.
The TLS folks learned this the hard way when they discovered that huge numbers of existing web servers have TLS version intolerance. So now TLS 1.2 is forever enshrined in the ClientHello.
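Concretely (a sketch of the RFC 8446 workaround, with the byte values defined there): the ClientHello's legacy_version field is frozen at TLS 1.2, and the real negotiation moved into an extension that intolerant servers simply skip:

    # legacy_version: pinned to 0x0303 ("TLS 1.2") forever, because bumping
    # it broke version-intolerant servers.
    legacy_version = bytes([0x03, 0x03])

    # The actual version is offered via the supported_versions extension (43),
    # which old servers ignore instead of choking on.
    supported_versions = bytes([
        0x00, 0x2B,  # extension type 43: supported_versions
        0x00, 0x03,  # extension data length
        0x02,        # length of the version list in bytes
        0x03, 0x04,  # 0x0304 = TLS 1.3
    ])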
https://pico-8.fandom.com/wiki/P8PNGFileFormat
Actual cases of proprietary chunks include iDOT from Apple (apparently a performance optimization for plain images)
https://www.hackerfactor.com/blog/index.php?/archives/895-Co...
and the Macromedia Fireworks save files
https://stackoverflow.com/questions/4242402/the-fireworks-pn...
So then it was pointless for PNG to be extensible? Not sure what your argument is.
The main use case for PNG is web browsers, and all of them seem to be on board. Using old web browsers is a bad idea anyway. You do get these relics showing up using some old version of Internet Explorer, but some images not rendering is the least of their problems. The main challenge is actually going to be updating graphics tools to export the new files. And teaching people that sRGB maybe isn't good enough any more. That's going to be hard, since most people have no clue about color spaces.
Anyway, that gives everybody plenty of time to upgrade. By the time this stuff is widely used, it will be widely supported. So, you kind of get forward compatibility that way. Your browser already supports the new format. Your image editor probably doesn't.
It's not, most images you encounter on the web need better compression.
The main PNG use case is storing lossless images locally as master copies that are then compressed for distribution, or editing workflows where lossy formats would degrade the image a little more with every edit.
This is news to me. I'm pretty sure the main use case for PNG is lossless transparent graphics.
There are about 3.6 billion people surfing the web and experiencing PNGs. That use case, consuming PNGs, seems to dwarf the perhaps 100 million (somewhat wild guess) graphic designers, web developers, and photo editing professionals who manipulate images for publishing (in any medium) or archiving.
If, on the other hand, you're considering the use cases envisioned by PNG's creators, or the use cases that interest the people processing or publishing images, then yes, these people are focused on the format itself and its capabilities.
I suspect this particular use of "use case" isn't terribly clear. Also these two considerations are not incompatible.
That being said, they can also do dumb things. However, right at the end of the sentence you quote, they say:
> we want to make sure we do it right.
So there's hope.
That's just changing an implementation detail of the encoder, and you don't need spec changes for that; e.g. there are PNG compressors which support zopfli for extra gains on the DEFLATE (at a not-insignificant cost). This is transparent to the client, as the output is still just a DEFLATE stream.
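To see why it's transparent: a decoder just inflates the concatenated IDAT payloads, and zlib neither knows nor cares which encoder produced the stream. A rough sketch (Python, standard library):

    import struct
    import zlib

    def raw_scanlines(path):
        idat = b""
        with open(path, "rb") as f:
            f.seek(8)  # skip the PNG signature
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, ctype = struct.unpack(">I4s", header)
                data = f.read(length)
                f.seek(4, 1)  # skip the CRC
                if ctype == b"IDAT":
                    idat += data
        # Identical result whether zlib, zopfli, or anything else wrote it.
        return zlib.decompress(idat)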
E.g. your GPU and monitor both have a USB-C port. Plug them together with the right USB cable and you'll get images displayed. Plug them together with the wrong USB cable and you won't.
USB 3 didn't have this issue - every cable worked with every port.
I believe the problem here is that you will have PNG images that “look” like you can open them, but you can't.
Labelling is a poor band-aid on the root problem - consumer cables which look identical and fit identically should work wherever they fit.
There should never have been a power-only spec for USB-C socket dimensions.
If a cable supports both power and data, it must fit in all sockets. If a cable supports only power it must not fit into a power and data socket. If a cable supports only data, it should not fit into a power and data socket.
It is possible to have designed the sockets under these constraints, with the caveat that they only go in one way. I feel that that would have been a better trade-off. Making them reversible means that you cannot have a design which enforces cable type.
Well, yes.
Why can't you use a power+data cable for the vape (or whichever appliance takes both)? What's the deal-breaker here?
The alternative is labeling, or plugging cables in to see if they do what you want them to do.
Both are a poor user interface.
That's even more confusing than the current state of affairs. If my phone has a power-and-data socket, then I cannot use a power-only cable to just charge it? Presumably with a charger that has a power-only socket. So I need a cable with two different ends anyway. Just go micro-USB at this point :)
Funnily enough, there is a 100% overkill way to solve such issues: just use super expensive certified TB cables. Well... plus an A-to-C adapter for noncompliant devices, I guess.
This is just pretending that if you have a cat and a dog in two bags and you call it “a bag”, it’s one and the same thing…
If PNG gets extended, it's entirely plausible that someone will view a PNG in their browser, save it, and then not be able to open the file they just saved.
There are those who claim "backwards compatibility" doesn't cover "how you use it" - but roughly none of the people who now have to deal with broken software care about such semantic arguments. It used to work, and now it doesn't.
It's a dichotomy. Either the provider accommodates users with older software or not. The file extension or internal headers don't change that reality.
Another example: new versions of PDF can adopt all the bells and whistles in the world, but I will still be saving anything intended to be long-lived as PDF/A-1, which means I don't get to use any of those features.
The USB-C spec never actually broke backwards compatibility.
Do they mention which C libraries use this spec?
What was broken was the promise of a "single cable to rule them all", partly due to manufacturers ignoring the requirements of USB-C (missing resistors or PD chips to negotiate voltages, requiring workarounds with A-to-C adapters), and partly due to a myriad of optional features that might be supported or not, without a clear way to indicate which.
You don't follow spec, you're on your own.
The first bit of our research is "What can we already make use of which requires no spec update? There are plenty of PNG optimizers. How much of that should go into the typical PNG libraries?"
Same with parallel encoding & decoding. An older image viewer will be able to decode it on one thread without ever knowing parallel decoding was an option.
Here's the worry-a-little part: everybody immediately jumps to file size to judge which image compression is better or worse. That isn't the best take, but it is what it is. So there is pressure to adopt newer technologies.
We often do have a way to maintain some degree of backwards compatibility even when we do this. For example, we can store a downsampled image for old viewers. Then extra, new chunks will know "Mix that with this full scale data, using a different compression".
As you can imagine, this mixing complicates things. It might not be the best option. Sooooo we're researching it :)
I’m not saying this is what will happen — but if I was able to construct a plausible approach to compression in ten minutes, then perhaps it’s a bit early to predict the doom of compatibility.
Also, if you forbid evolving existing formats, the only way to improve is to introduce a new format, and I'd argue that causes even more fragmentation and is more difficult to adopt. Look at all the drama surrounding JPEG XL.
> Many of the programs you use already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
It might be too late to rename png to .png4 or something. It sounds like we're using the new png standard already in a lot of our software.
Back then, there were no C# libraries for it, but it's actually quite easy to make an APNG from PNGs directly by writing chunks with the correct headers; no encoders needed (assuming the input PNGs are already encoded).
https://github.com/NightElfik/Malsys/blob/master/src/Malsys....
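For anyone curious, here is a stripped-down sketch of the same idea (Python rather than C#; it assumes all frames share the same dimensions, bit depth, and colour type, ignores palettes and ancillary chunks, and hard-codes a fixed delay):

    import struct
    import zlib

    SIGNATURE = b"\x89PNG\r\n\x1a\n"

    def chunk(ctype, data):
        # length + type + data + CRC32 over type and data
        return (struct.pack(">I", len(data)) + ctype + data
                + struct.pack(">I", zlib.crc32(ctype + data)))

    def chunks_of(png):
        pos = 8  # skip signature
        while pos < len(png):
            length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
            yield ctype, png[pos + 8:pos + 8 + length]
            pos += 12 + length

    def make_apng(frames, delay_ms=100):
        ihdr = next(d for t, d in chunks_of(frames[0]) if t == b"IHDR")
        width, height = struct.unpack(">II", ihdr[:8])
        out = [SIGNATURE, chunk(b"IHDR", ihdr),
               chunk(b"acTL", struct.pack(">II", len(frames), 0))]  # 0 = loop
        seq = 0
        for i, png in enumerate(frames):
            # fcTL: per-frame control (size, offsets, delay, dispose/blend)
            out.append(chunk(b"fcTL", struct.pack(
                ">IIIIIHHBB", seq, width, height, 0, 0, delay_ms, 1000, 0, 0)))
            seq += 1
            for ctype, data in chunks_of(png):
                if ctype != b"IDAT":
                    continue
                if i == 0:
                    out.append(chunk(b"IDAT", data))  # first frame keeps IDAT
                else:
                    # later frames: fdAT = sequence number + raw IDAT payload
                    out.append(chunk(b"fdAT", struct.pack(">I", seq) + data))
                    seq += 1
        out.append(chunk(b"IEND", b""))
        return b"".join(out)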
While I welcome that there is now PNG with animations, I am less impressed about how Mozilla chose to push for it.
Using PNG's magic numbers and pretending to existing software that it is just a normal PNG? That is the same mindset that led to HTML becoming tag soup. After all, HTML with a <blink> tag is still HTML, no?
I think they could have achieved animated PNG standardization much faster with a more humble and careful approach.
> PNG is pronounced “ping”
See the end of Section 1 [0]
[1] https://edition.cnn.com/2013/05/22/tech/web/pronounce-gif
People believed me. Still funny.
Not sure I'll bother to reprogram myself from “png”, “pung”, or “pee-enn-gee”.
So why can’t you do that with GIF or PNG? People that create things get to name them.
And if they pick something dumb enough other people get to ignore them.
You'll commonly call someone by their preferred pronunciation out of respect, forced or freely given.
In a situation where someone does something really stupid or annoying and the forced respect isn't there, most people don't.
On inanimate objects: Aluminium was first ratified by the IUPAC as aluminium⁰, with the agreement of its discoverer Sir Humphry Davy¹, yet one huge nation calls it something else…
On people: nicknames are a thing, are you saying those are universally wrong? But yes, when a person tells me that they'd prefer their name pronounced a different way, or that they'd prefer a different name entirely, or that they don't like the nickname others use for them, you can bet your arse that I'll make the effort to use their preferred name and pronunciation in future.
------
[0] Though it should be noted that aluminum was, a few years later, officially accepted as an alternate form.
[1] He initially called it aluminum in the first paper.
But also, no, not universally even for babies, especially when the name is something ridiculous like X Æ A-Xii where even parents disagree on pronunciation, or when the person himself uses a "non-specced" variant
Hard-g is wrong, and those who use it are showing they have zero respect for others when they don't have to.
It's the tech equivalent to the shopping cart problem. What do you do when there is no incentive one way or the other? Do you do the right thing, or do you disrespect others?
Naming is probably one of the few language areas that I think should be prescriptive, even while language at large is descriptive.
A file format is not a sentient being. The creator's intent matters much more. If GIF had sentience and could voice a desire one way or the other, the whole discussion would be moot as it would clearly be disrespectful to intentionally mispronounce the name.
The G in gif is for graphics. Not 'giraffics'. And most people in the world have no idea what Jif even is, much less a particular catchphrase from an old ad campaign that barely even connects.
English has both pronunciations for "gi" based on origin. Giraffe, giant, ginger, etc from Latin; gift, give, (and presumably others) from Germanic roots.
Using the preferred one is just a matter of politeness.
Also, it's quite ironic to prescribe "linguistic prescriptivism" as wrong.
W.r.t. communication, aside from personal preference, one can either respect the creator or the audience. If I stand in front of 10 colleagues, 10 out of 10 would not understand "jif", or would only get it because this issue has some history by now. "gif", on the other hand, has no friction.
Genghis Khan, for example, sounds very different from its original Mongolian pronunciation. And there are myriad others as well.
I continue to pronounce it how I prefer it, not as a slight, but most people here would be surprised by the soft g.
If I ever meet him I’ll attempt to pronounce it soft-g.
On the other hand, even though my name exists and is reasonably common in English, I'm fairly certain neither you nor the GIF creator would address me the way I pronounce my name. I would understand anyway, and wouldn't care one bit.
The debate itself is old. "Since the 90s", Wikipedia says, and keep in mind the format is from 1987 – so I would say the debate has been on from the get-go. Appropriate, too, if you think back: arguing about this kind of stuff was pretty common. Emacs vs vim, browser wars, different kinds of computers, tribalism everywhere.
Thinking about it, I think I understand why the hard G makes sense to people. With GPU, we pronounce the individual letters, as it's clearly an abbreviation – no sane English word starts with "gp". With GIF, though, even though it's an abbreviation, it looks a lot like a normal word: "gift", and English also has "give", another one with a hard G, so it feels familiar to say. Moreover, the US, where GIF comes from, already had Jif established as a peanut butter brand, so it makes sense not to pronounce a newly invented, differently written word the same as an already established thing. Well, at least to some it makes sense!
https://file.org/extension/jif
https://fileinfo.com/extension/jiff
https://www.reddit.com/r/todayilearned/comments/4rirr8/til_t...
Surely they aren't releasing a new, incompatible version and expecting us to pretend it's the same format...?
> This updates the existing image/png Internet Media type
whyyyyyyy
We went to pretty extreme lengths to make sure old software worked with the new changes. Effectively, the limit will be the software, not the image.
For example, you can imagine some old software that is unaware of color spaces and treats everything as sRGB. There is nothing we can do to make that software show a Display P3 correctly. However, we can still show the image well enough that a user understands "that is a red apple".
Society doesn't need a new image format. I'd wager to say it doesn't need any new multimedia format. Big corporate entities do, and have been churning them out at a steady pace.
Look at poor webp - a format pushed by the largest industry players - and the abysmal everyday use it gets, and the hate it generates.
They say it's technically compatible, since older image decoders should recognize that the PNG file is using a different compression algorithm than the default.
> Many programs already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
This intentionally ignores the fact that there are countless PNG decoders out in the wild, many using libpng, the standard decoder, last updated 6 years ago; they will not be able to read the new PNG v2 files.
They should have used a different file extension, PNG2, to distinguish this incompatible format. Otherwise, users will be confused why their newly saved PNG file cannot be read by certain existing programs.
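For what it's worth, the hook they're relying on already exists in the format: IHDR carries a compression-method byte, and only value 0 (DEFLATE) is defined so far, so a well-behaved old decoder should refuse cleanly rather than misrender. A sketch of that check (Python; the filename is a placeholder):

    import struct

    with open("image.png", "rb") as f:
        f.seek(8)  # skip the signature; IHDR must be the first chunk
        length, ctype = struct.unpack(">I4s", f.read(8))
        assert ctype == b"IHDR"
        width, height, depth, colour, comp, filt, interlace = \
            struct.unpack(">IIBBBBB", f.read(13))

    if comp != 0:  # 0 = DEFLATE, the only method defined so far
        raise ValueError("unsupported compression method %d" % comp)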
There's a PR for APNG: https://github.com/pnggroup/libpng/pull/706 – it seems there was some work for HDR in e.g. https://github.com/pnggroup/libpng/pull/635 as well. Related: https://github.com/pnggroup/libpng/issues/507
https://www.libpng.org/pub/png/libpng.html
Looks like this is the proper location for the project.
To start, there's a byte with the upper bit set which ensures an "8-bit clean" transport. If it's stripped, it becomes a harmless tab. Then the literal "PNG" text so you can see it in a text editor. Then a CR-LF pair to check for CR-LF to LF translations. Then, a CTRL-Z to stop display on DOS-like systems. And finally, another LF to check for LF to CR-LF translations.
It's a clever "magic" that basically ensures a binary transport layer. Things that mattered back in 1996.
https://www.libpng.org/pub/png/spec/1.2/PNG-Rationale.html#R...
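Spelled out byte by byte (a small illustration, not code from the spec):

    PNG_SIGNATURE = bytes([
        0x89,              # high bit set: a 7-bit transport strips it to 0x09, a tab
        0x50, 0x4E, 0x47,  # "PNG", readable in a text editor
        0x0D, 0x0A,        # CR LF: catches CR-LF -> LF translation
        0x1A,              # Ctrl-Z: stops display under DOS `type`
        0x0A,              # LF: catches LF -> CR-LF translation
    ])

    def looks_like_png(path):
        with open(path, "rb") as f:
            return f.read(8) == PNG_SIGNATURE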
Estimates are that 95% of Internet users have a browser that supports WebP and that ~25% of the top million websites serve WebP images. I wouldn't call that abysmal.
My webcrawler sucks down a lot of WebP images, at least it did before it got the smackdown from Cloudflare.
Hell, for some software features (like stickers in some chat apps), WebP is mandatory.
HEIF files, on the other hand...
One example is Sony's SRF camera raw format.
Programs like Photoshop and Affinity have to bring their own decoders where previously none were required.
Having asked that in a slightly confrontational way: one of the reasons I started using VLC all those years ago, and still use it to this day, was having trouble with other media players that relied on OS support failing to work well (or at all) with some codecs, while VLC brought support for them, and their dog, built in and reliable. Dragging your own format-support libraries with you can be beneficial.
There are so many uneven areas of Reddit where WebP doesn't work. Old reddit, profile support, mod tools, etc.
It doesn't matter if the alternative is technically superior once the majority use the mainstream thing.
Photoshop still won't open it; macOS Preview opens it but then demands to convert it to TIFF when you try to edit it.
We used to do this with JPEG, in fact. And that's why many pictures on Facebook from pre-2018 or so all have a distinctive grainy look. It's artifacts on top of artifacts. Storage on phones isn't tight anymore, we don't need to store photos in a format meant to minimize bytes at the expense of quality.
Edit: and good luck uploading the format to the majority of web forms that aren't FAANG.
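The generational loss mentioned above is easy to reproduce (a quick demo with Pillow; the filenames and quality setting are arbitrary):

    from PIL import Image

    img = Image.open("photo.jpg").convert("RGB")
    for generation in range(50):
        img.save("generation.jpg", quality=70)  # lossy re-encode
        img = Image.open("generation.jpg").convert("RGB")
    # After ~50 generations the blockiness and ringing are plainly visible.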
Interns won't want to work on a dead end like this. Moreover, they need to be supervised by someone who doesn't want to be cut for being in the lowest X% of usefulness at the company. So all these existing tools that aren't primary revenue generators just sit in coast mode.
Let’s also not forget the dependency mess that leaves in applications before we do though..
Better image formats serve entities who store images at scale, not end users.
In what other industry would it be considered acceptable to exclude 5% of visitors/users/clients?
See CSS image-set : https://developer.mozilla.org/en-US/docs/Web/CSS/image/image...
Rational, or economical? I find it rational to help someone in need since I'd want others to do the same to me, even if it's not financially profitable for me. Imo more factors flow into what's rational, but I understand what you mean by corporate greed working this way (less than 10% of people are blind, neither male nor female, run a free operating system or can't afford a new computer, etc., so yep they're not profitable groups and for-profits don't optimise for that)
If a corporation has determined that profit maximization is their core tenet, excluding the needs of a minority of users can likely be deduced in a rational manner from that tenet. That is precisely why values need to be forced onto corporate actors through regulation, e.g. in this case through mandatory accessibility guidelines like EU directive 2019/882 that enters into force this very week.
Or Linux users? Or even Firefox users in our market?
As for Linux users… I do recall they were even less than the 3%. Firefox users were more, though.
In any case, I'm almost sure most Linux users were fine. We just didn't want to support old browsers.
That's not how it works.
The server declares what versions of media it has, and the client requests a supported media format. The same trick has been used for audio and video for ages too.
Example:
    <picture>
      <source srcset="a.webp" type="image/webp">
      <img src="fallback.jpg">
    </picture>
Images are often provided at different resolutions too; that way, depending on the pixel density of the device and the physical size, the browser can select a photo that has high enough resolution but is not needlessly large, while also selecting the preferred image format.
But even beyond that, most file formats have a bit of a header at the start of the file that declares the actual format of the file. Browsers already understand that and use the correct renderer for a file without an extension.
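A toy version of that sniffing (nothing like the real, much stricter WHATWG MIME-sniffing algorithm, but it shows the idea):

    def sniff(data):
        if data.startswith(b"\x89PNG\r\n\x1a\n"):
            return "image/png"
        if data.startswith(b"\xff\xd8\xff"):
            return "image/jpeg"
        if data[:6] in (b"GIF87a", b"GIF89a"):
            return "image/gif"
        if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
            return "image/webp"
        return "application/octet-stream"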
I will say, though, that it's not universal; it depends heavily on the corner of the internet you're on.
Note that I'm looking at "all tracked," which excludes 2% "other" browsers in the data whose featureset is not known.
e.g.
- cars: not everyone is physically able to drive
- books: blind people can't read
- music: deaf people can't hear
It is a form of the 80/20 or 90/10 rule: the last small percentage costs as much as the majority.
(Also, the parent comment's example is not so good because, as someone else pointed out, just because 25% of the top websites are serving WebP does not mean they aren't also serving alternative formats for clients that don't support it, as this is quite trivial to set up.)
> Many […] programs […] already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
> Plus, you saw some broadcast companies in that list above. Behind the scenes, hardware and tooling are being updated to support the new PNG spec.
Whilst we're at it, please get rid of RGB and make it N channels too.
Libraries can choose to render that into a 3 channel, 8 bit buffer for legacy applications - but the data will be there for CMYK or HDR, or depth maps, or transparency, or focus stacking, or any other future feature!
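As a toy sketch of that "render down for legacy consumers" idea (NumPy; the plane names and white background are my assumptions, not anything from a spec):

    import numpy as np

    def to_legacy_rgb8(planes):
        # planes: dict of float32 arrays in [0, 1], e.g. R, G, B, A, depth...
        rgb = np.stack([planes["R"], planes["G"], planes["B"]], axis=-1)
        if "A" in planes:
            alpha = planes["A"][..., None]
            rgb = rgb * alpha + (1.0 - alpha)  # composite over white
        # Extra planes (depth, focus, spot colours...) are simply ignored.
        return (np.clip(rgb, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)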
We need good video formats, however. Video makes up most of global internet traffic and probably accounts for a good part of global storage capacity too. Even slightly better compression will have a massive impact.
We jumped through quite a lot of hoops to make sure old software will be able to display new images. They simply won't display them optimally. But for the most part, that would be because the old software wouldn't display images optimally anyway. So the limit was the software, not the format.
What I mean by this is old software that treats everything as sRGB wouldn't correctly show a Display P3 image anyway. But we made sure it will still display the image as correctly as it could.
What about it?
"Lossless WebP is typically 26% smaller than PNG, while lossy WebP can be 25-34% smaller than JPEG at equivalent quality levels"
This literally saves hundreds of thousands in cost, bandwidth, and electricity every month across the internet. In fact, I strongly believe this is one of the greatest contributions from Google to society, just like ZSTD from Facebook.
"WebP is used by 16.7% of all websites. This means that while it's a popular image format, it's not yet the dominant format, with JPEG still holding the majority share at 73.0%, according to W3Techs. However, WebP offers significant advantages in terms of compression and file size, making it a preferred choice for many web developers. "
Therein lies the lie.
Image and video compression comparisons are like statistics: with the right corpus and evaluation criteria you can show whatever narrative you want to push.
¯\_(ツ)_/¯
You'll never be able to faithfully represent an HDR image on a non-HDR system, but you'll still see an image.
But you can absolutely have an SDR image encoded using a large color space. So I am not sure why the author talks about color primaries when trying to justify HDR… I still don't know what kind of HDR images this new PNG variant can encode.