Posted by bluedel 6 days ago

A new PNG spec (www.programmax.net)
667 points | 595 comments
ProgramMax 5 days ago|
Author here. Hello everyone! Feel free to ask me anything. I'll go ahead and dispel some doubts I already see here:

- It isn't really a "new format". It's an update to the existing format.
- It is very backwards compatible.
  -- Old programs will load new PNGs to the best of their capability. A user will still know "that is a picture of a red apple".

There also seems to be some confusion about how PNGs work internally. Short and sweet:

- There are chunks of data.
  -- Chunks have a name, which says what data it contains. A program can skip a chunk it doesn't recognize.
- There is only one image stream.
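
In code, that chunk walk is tiny. Here's a minimal Python sketch of it (layout only, plus a CRC check; real decoders do more validation, and "example.png" is just a hypothetical file):

    import struct, zlib

    def iter_chunks(path):
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n"          # fixed PNG signature
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, name = struct.unpack(">I4s", header)  # big-endian length + 4-char chunk name
                data = f.read(length)
                crc, = struct.unpack(">I", f.read(4))         # CRC covers name + data
                if zlib.crc32(name + data) & 0xFFFFFFFF != crc:
                    raise ValueError("bad CRC in chunk %r" % name)
                yield name, data                              # a program may skip names it doesn't know

    for name, data in iter_chunks("example.png"):
        print(name.decode("ascii"), len(data), "bytes")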

fwip 5 days ago||
Do you have any examples on hand of PNGs that use the new features of the spec? It would be cool to see a little demo page with animated or HDR images, especially to download to test if our programs support them yet.
ProgramMax 5 days ago||
Sure!

Chris Lilley--one of the original PNG co-authors--has a post with an example HDR image: https://svgees.us/blog/cICP.html It is about halfway down, with the birthday cake. Generally, we tech nerds have phones that are capable of displaying it well. So perhaps view the page on your phone.

What you should look for is the cake, the pink tips in her hair, and the background being more vivid. For me, the pink in the cake was the big give-away.

There is also the Web Platform Tests (WPT) which we use to validate browser support: https://wpt.fyi/results/png/cicp-chunk.html?label=master&lab...

Although, that image is just a boring teal. See it live in your browser here: https://wpt.live/png/cicp-chunk.html

For an example of APNG, you can use Wikipedia's images: https://en.wikipedia.org/wiki/APNG

But you have a bigger point: I should have live demonstrations of those things to help people understand.

jacekm 5 days ago|||
Thank you for the examples. I tried the one with a pink cake. Turns out that on my machine only web browsers are capable of displaying the image properly. All viewers (IrfanView, XnView, Nomacs, Windows Photos) and editors (Paint.NET, GIMP) that I've tried only showed the "washed out" picture.
ProgramMax 5 days ago|||
Yeah. We were able to get buy-in from some big players. We cannot contact every group, though. My hope is since big players have bought in, others will hear the message and update their programs.

Sooooo file some bugs :D

Also, be kind to them. This literally launched yesterday.

dave8088 5 days ago||
The creator of photopea.com is very responsive to user suggestions. I’d recommend contacting him if you haven’t already.
sedatk 4 days ago||||
It's interesting that Paint.NET supports the vivid image if you screenshot the cake (Win+Shift+S) and paste it. But, opening the PNG opens up the washed out picture.
account42 4 days ago|||
Huh, for some reason GIMP doesn't even show the usual color space conversion dialog.
jcynix 4 days ago||||
> But you have a bigger point: I should have live demonstrations of those things to help people understand.

Pink can pose problems for individuals with red-green color blindness (or more exactly: color vision deficiency). So make sure that examples work for these people too. Otherwise the examples might not work for about 8% of your male viewers.

Nopoint2 5 days ago||||
I never realized how limited sRGB is. I guess this is why people liked CRT TVs, and why you could never watch analog TV properly on a PC screen.
account42 4 days ago||
It's really not that limited; the problem only arises if you reinterpret a larger gamut as sRGB without doing the proper conversion, which is what makes things look washed out.
Nopoint2 4 days ago||
That's what I thought too, but the difference is big. You'd think you'd maybe lose some colored lights or very bright flowers, but no, colors outside sRGB are common.

There was nothing you could do about the TV, the screen couldn't show all the colors that you needed.

cratermoon 4 days ago||||
I can see a clear difference between the images in Firefox on MacOS with my M1 macbook. Very nice.
fwip 5 days ago|||
Thanks, I appreciate all of these links. :)
dave8088 5 days ago|||
You’re awesome. Thanks for making things better.
account42 4 days ago|||
> It isn't really a "new format". It's an update to the existing format. - It is very backwards compatible. -- Old programs will load new PNGs to the best of their capability. A user will still know "that is a picture of a red apple".

This is great but also has the issue that users might not notice that their setup is giving them a less than optimal result. Of course that is probably still better than not having backwards compatibility.

Edit: Seems the backwards compatibility isn't as great as it could be. Old programs show a washed out image instead, which sucks. This should have been avoidable in the same way JPG gain maps work, so that you only need updated programs to take advantage of the increased gamut on wider-than-sRGB screens, not to correctly show colors that already fit into sRGB.

ProgramMax 4 days ago||
PNG Fourth Edition, which we are working on now, is likely to add gain maps.

However, gain maps are extra data. So there is a trade off.

The reason gain maps didn't make it into Third Edition is that they aren't yet a formal standard. We have a bunch of the work ready to go once that standard launches.

Nanopolygon 4 days ago|||
If you are really going to do something new, I recommend building on work that is already very good at this. For example, HALIC (High Availability Lossless Image Compression). It is both extremely fast and has a very good compression ratio, and its memory usage is very, very low. There is also very strong multithread support already. I think something like this would be great for the new PNG. Of course, we don't know what the author of HALIC thinks about this.
nabla9 4 days ago|||
Does it have any advantage over Lossless encoding in JPEG XL?
ProgramMax 4 days ago||
Yes, lots.

The big one is adoption. I love JPEG XL and hope it becomes more widely adopted. It is very scary to add a format to a browser because you can never remove it. Photoshop and MSPaint no longer support ICO files, but browsers do. So it makes sense for browsers to add support last, after it is clearly universal. I think JPEG XL is well on its way using this approach. But it isn't there yet, and PNG is.

There is also longevity and staying power. I can grab an ancient version of Photoshop off eBay and it'll support PNG. This also benefits archivists.

As a quick side note on that: I want people to think about their storage and bandwidth. Have they ever hit storage/bandwidth limits? If so, were PNGs the cause? Was their site slow to load because of PNGs? I think we battle on file size as an old habit from the '90s image compression wars. Back then, we wanted pixels on the screen quickly. The slow image loads were noticeable on dial-up. So file size was actually important then. But today?? We're being penny-wise and pound-foolish.

LinAGKar 4 days ago|||
>we'll be researching compression updates for PNG Fifth Edition.

What sort of improvements might we expect? Is there a chance of it rivalling lossless WebP and JPEG XL?

ProgramMax 4 days ago||
Our first goal is to see what we can get for "free" right now. Most of the time, people save a PNG which is pretty far from optimally compressed. Then they compare that to another format which is closer to optimal and draw a poor comparison.

You can see this with PNG optimizers like OptiPNG and pngcrush.
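
For a rough feel of that gap, just re-saving a file with Pillow's optimizer often shrinks it noticeably (a minimal sketch with a hypothetical input file; dedicated tools like OptiPNG and pngcrush try far more filter/strategy combinations):

    import os
    from PIL import Image

    src = "screenshot.png"                                    # hypothetical, unoptimized input
    Image.open(src).save("recompressed.png", optimize=True)   # spend extra effort on smaller output

    print(os.path.getsize(src), "->", os.path.getsize("recompressed.png"), "bytes")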

So step 1 is to improve libpng. This requires no spec change.

Step 2 is to enable parallel encoding and decoding. We expect this to increase the file size, but aren't sure how much. It might be surprisingly small (a few hundred bytes?). It will be optional.

Step 3 is the major changes like zstd. This would prevent a new PNG from being viewable in old software, so there is a considerably higher bar for adoption. If we find step 1 got us within 1% of zstd, it might not be worth such a major change.

I don't yet know what results we'll find or if all the work will be done in time. So please don't take this as promises or something to expect. I'm just being open and honest about our intentions and goals.

Nanopolygon 4 days ago||
Solutions such as OptiPNG and Pngcrush require extra processing power on top of the already slow PNG. But in most cases they are still behind.
derefr 5 days ago|||
So, I'm a big fan of metaformats with generalized tooling support. Think of e.g. Office Open XML or ePub — you don't need "an OOXML parser" / "an ePub parser" to parse these; they're both just zipped XML, so you just need a zipfile library and libxml.

For the lifetime of PNG so far, a PNG file has almost, but just barely not, been a valid Interchange File Format (IFF) file.

IFF is a great (simple to understand, simple to implement support for, easy to generate, easy to decode, memory-efficient, IO-efficient, relatively compact, highly compressible) metaformat, that more people should be aware of.
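
To make "simple to implement" concrete: a bare-bones Python walker over classic IFF chunks (ignoring FORM/LIST nesting; "example.aiff" is just a hypothetical file) is only a few lines:

    import struct

    def iter_iff_chunks(buf, offset=0, end=None):
        """Walk chunks laid out as [4-byte FourCC name][4-byte big-endian length][data][pad to even]."""
        end = len(buf) if end is None else end
        while offset + 8 <= end:
            name, length = struct.unpack_from(">4sI", buf, offset)
            yield name, buf[offset + 8 : offset + 8 + length]
            offset += 8 + length + (length & 1)    # chunk contents are padded to an even size

    with open("example.aiff", "rb") as f:
        buf = f.read()
    # An AIFF/RIFF-style file is one big FORM/RIFF chunk whose data starts with a
    # 4-byte form type, followed by the nested chunks.
    for name, data in iter_iff_chunks(buf):
        print(name, len(data))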

However, up to this point, the usage of IFF has consisted of:

• some old proprietary game-data and image formats from the 1980s that no modern person has heard of

• some popular-yet-proprietary AV formats [AIFF, RIFF] that nobody would write a decoder for by hand anyway (because they would need a DSP library to handle the resulting sample-stream data anyway, and that library may as well offer container-format support too)

• The object files of an open but uncommon language runtime (Erlang .beam files), where that runtime exposes only high-level domain-specific parsing tooling (`beam_lib`) rather than IFF-general decoding tooling

• An "open-source but corporate-steered" image format that people are wary of allowing to gain ecosystem traction (WebP — which is more-specifically a document in a RIFF container)

• And PNG... but non-conformantly, such that any generic IFF decoder that could decode the other things above, would choke on a PNG file.

IMHO, this is a major reason that there is no such thing as "generalized IFF tooling" today, despite the IFF metaformat having all the attributes required to make it the "JSON of the binary world". (Don't tell me about CBOR; ain't nobody hand-rolling a CBOR encoder out of template strings.)

If you can't guess by now, my wishlist item for PNGv3, is for PNG files to somehow become valid/conformant IFF files — such that the popularity of PNG could then serve as the bootstrap for a real IFF tooling ecosystem, and encourage awareness/use of IFF in new greenfield format-definition use-cases.

---

Now, I've written PNG parsers, and generic IFF parsers too. I've even tried this exact unification trick before (I wanted an Erlang library that could parse both .beam files and PNG files. $10 if you can guess the use-case for that!)

Because of this, I know that "making PNG valid per IFF" isn't really possible by modifying the PNG format, while ensuring that the resulting format is decodable by existing PNG decoders. If you want all the old [esp. hardware] PNG parsers to be compatible with PNGv3s, then y'all can't exactly do anything in PNGv3 like "move the 4-byte CRC inside the chunk as measured by the 4-byte chunk length" or "make the CRCs into their own chunks that reference the preceding record".

But I'm not proposing that. I'm actually proposing the opposite.

Much of what PNGv2 did in contravention of the IFF spec, is honestly a pretty good idea in general. It's all stuff that could be "upstreamed" — from the PNG level, to the IFF level.

I propose: formalizing "the variant of IFF used in PNG" as its own separate metaformat specification — breaking this metaformat out from the PNG spec itself into its own standards document.

This would then be the "Interchange File Format specification, version 2.0" (not that there was ever a formal IFFv1 spec; we all just kind of looked at what EA/Commodore had done, and copied it in our own code since it was so braindead-easy to implement.)

This IFF 2.0 spec would formalize, at least, a version or "profile" of IFF for which PNGv2 images are conformant files. It would have chunk CRCs; chunk attribute bits encoded for purposes of decoders + editors via meaningful chunk-name letter-casing; and an allowance for some number of garbage bytes before the first valid chunk begins (for PNG's leading file signature that is not itself a valid IFF chunk.)

This could be as far as the IFF 2.0 spec goes — leaving IFFv1 files non-decodable in IFFv2 parsers. But that'd be a shame.

I would suggest going further — formalizing a second IFFv2 "profile" against which IFFv1 documents (like AIFF or RIFF files) are conformant; and then specifying that "generic" IFFv2-conformant decoders (i.e. a hypothetical "libiff", not a format-specific libpng) MUST implement support for decoding both the IFFv1-conforming and the PNGv2-conforming profiles of IFF.

It could then be up to the IFF-decoding-tooling user (CLI command user, library caller) to determine which IFFv2 "profile" to apply to a given document... or the IFFv2 spec could also specify some heuristic algorithm for input-document "profile" detection. (I think it'd be pretty easy; find a single chunk, and if what follows its chunk-length is a CRC that validates that chunk, then you have the PNGv2-like profile. Whereas if it's not that, but is instead four bytes of chunk-name-valid character ranges, then you've got the IFFv1-like profile. [And if it's neither, then you've got a file with a corrupted first chunk.])
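
Roughly, that sniffing heuristic could look like this (a Python sketch of one reading of it; it assumes buf already starts at the first chunk, i.e. after any leading signature/garbage bytes have been skipped):

    import struct, zlib

    def sniff_profile(buf):
        # PNGv2-like layout: [4-byte BE length][4-byte name][data][4-byte CRC over name + data]
        if len(buf) >= 12:
            length, = struct.unpack_from(">I", buf, 0)
            end = 8 + length
            if end + 4 <= len(buf):
                crc, = struct.unpack_from(">I", buf, end)
                if zlib.crc32(buf[4:end]) & 0xFFFFFFFF == crc:
                    return "PNGv2-like profile"
        # IFFv1-like layout: [4-byte name][4-byte BE length][data, padded to even length]
        if all(0x20 <= b < 0x7F for b in buf[0:4]):    # chunk-name-valid character range
            return "IFFv1-like profile"
        return "corrupted first chunk"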

---

And, if you want to go really far, you could then specify a third entirely-novel "profile", for use in greenfield IFF applications:

• A few bytes of space aren't so precious; we can hash things much faster these days, with hardware-accelerated hashing instructions; and those instructions are also for hashes that do much better than CRC to ensure integrity. So either replace the inline CRCs with CRC chunks, or with nested FORM-like container records (WCRC [len] [CRC4] [interior chunk]). Or just skip per-chunk CRCs and formalize a fHsh chunk for document-level integrity, embedding the output of an arbitrary hash algorithm specified by its registered https://github.com/multiformats/multihash "hash function code".

• Re-widen the chunk-name-valid character set to those valid in IFFv1 documents, to ensure those can be losslessly re-encoded into this profile. To allow chunks with non-letter characters to have a valid attribute decoding, specify a document-level per-chunk-name "attributes of all chunks of this type" chunk, that can either be included into a given concrete format's header-chunk specification, or allowed at various points in the chunk stream per a concrete format's encoding rules (where it would then be expected to apply to any successor + successor-descendant chunks within its containing chunk's "scope.") Note that the goal here is to keep the attribute bits in some way or another — they're very useful IMHO, and I would expect an IFF decoder lib to always be emitting these boolean chunk-attribute fields as part of each decoded chunk.

• Formalize the magic signature at the beginning into a valid chunk, that somehow encodes 1. that this is an IFF 2.0 "greenfield profile" document (bytes 0-3); 2. what the concrete format in use is (bytes 4-7). (You could just copy/generalize what RIFF does here [where a RIFF chunk has the semantics of a LIST chunk but with a leading 4-byte chunk-name type], such that the whole document is enclosed by a root chunk — though this is painful in that you need to buffer the entire document if you're going to calculate the root-chunk length.)

I'm just spitballing; the concrete details of such a greenfield profile don't matter here, just the design goal — having a profile into which both IFFv1 and PNGv2 documents could be losslessly transcoded. Ideally with as minimal change to the "wider and weirder/more brittle ecosystem" side [in this case that's IFFv1] as possible. (Compare/contrast: how HTML5 documents are a profile of HTML that supersedes both HTML4 and XHTML1.1 — supporting both unclosed tags and XML-namespaced element names — allowing HTML4 documents to parse "as" HTML5 without rewrites, and XHTML1.1 documents to be transcoded to HTML5 by just stripping some root-level xmlns declarations and changing the doctype.)

ProgramMax 5 days ago|||
Strangely, I was familiar with AIFF and RIFF files but never made the connection that they're both IFF. I hadn't known about IFF before your post. Thank you :)

W3C requires that we do not break old, conformant specs. Meaning if the next PNG spec would invalidate prior specs, they won't approve it. By extension, an old, conformant program will not suddenly become non-conformant.

I could see a group of people formalizing IFFv2, and adapting PNG to it. But that would effectively be PNGIFF, not PNG. It would be a new spec. Because we cannot break the old one.

That might be fine. But it comes with a new set of problems, like adoption.

Soooo I like the idea but it would probably be a separate thing. FWIW, it would actually be nice to make a formal IFF spec. If there was no governing body that owns it, we can find an org and gather interest.

I doubt W3C would be the right org for it. ISO subgroup??

saintfire 5 days ago||
They pretty much say the same thing halfway through. Don't change PNG but adapt IFF to work with PNG's flavour of IFF.
ProgramMax 5 days ago||
Right. Sorry, that was supposed to be a "yes, and..." to provide some additional context.
account42 4 days ago||||
We really shouldn't be making new standards with big endian byte order.

It's also questionable how much you actually benefit from common container formats like this since you need to know the application specific format contained anyway in order to do anything useful with it. It also causes problems where "smart" programs treat files in ways that make no sense, e.g. by offering to extract a .docx file just because it looks like a .zip

derefr 4 days ago|||
> you need to know the application specific format contained anyway in order to do anything useful with it

One neat thing about IFF is that all of its "container" chunk types (LIST, FORM, CAT) are part of the standard; the expectation is that domain-specific chunk types should [mostly] be leaf nodes. As such, IFF is at least "legible" in the same way that XML or JSON or Lisp is legible (and more than e.g. ELF is legible): you're meant to decompose an object graph into individual IFF chunks for each object in the graph. Which translates to IFF files being "browseable", rather than dead-ending in opaque tables that require some other standard to tell you how they're even row-delimited.

Another neat thing is that, like with namespaced XML element names, chunk names — at least the "public" ones — are meant to have globally-unique meanings, being registered in a global registry (https://wiki.amigaos.net/wiki/IFF_FORM_and_Chunk_Registry). This means that IFF tooling can "browse" an arbitrary unknown IFF document, find a chunk it does understand the meaning of, and usefully decode it (and maybe its descendants) for you.

Many more-complex IFF formats (e.g. the AV containers like RIFF) embed data of other media types as chunks of these registered types. Think "thumbnail in a video file" or "texture in a scene file." Your tooling doesn't need to know the semantics of the outer format, to be able to discover these registered inner chunks inside it, and browse/preview/extract them. (Or replace them one-for-one with another asset of the same type; or even, if they're inside a simple LIST chunk, add or remove instances of the asset from the list!)

Also, somewhat interestingly, given the way IFF is structured, there is no inherent difference between embedding a sub-resource "opaquely" vs embedding it "legibly" — i.e. if you embed a [headerless] IFF document as the value of a chunk in another IFF document, then that's exactly the same thing as nesting the root-level chunk(s) of that sub-document within the parent chunk. It's like how an SVG sub-document inside an XHTML document isn't a separate serialized blob that gets sucked out and parsed, but rather just additional tags in the XHTML document-string, around which a boundary of "this is a separate XML sub-document" gets drawn by some "DOM document builder" code downstream of the actual XML parser.

---

But besides the technical "it can be done" points, let me also speak more in terms of the motivation. Why would you want to?

Well, have you ever wanted to open up a complex file and pull its atomic-level assets out? Your first thought when hearing that was probably "that sounds like a nightmare" — and yes, today, it is.

But back in the 1980s, with the original growth of IFF-based formats, we temporarily lived in this wonderland where there were all these different browseable / explorable file formats, that could be cracked open with exactly the same tools.

Do you wonder how and why the game modding scene first came into existence? It was basically the result of games storing their asset packs in these simple-to-parse/generate file formats — where people could easily drop-in replace one of those assets with a new one with simple command-line tools, or even with a GUI, without worrying about matching asset sizes / binary offset patching / etc — let alone with any knowledge of how the container file format works.

Do you appreciate how macOS app bundles just have a browseable, hierarchical Resources directory inside them? Before app bundles, macOS applications held their resources in a "resource fork" — essentially a set of FourCC-tagged file extended-attributes (though actually, a single on-disk packfile that acted as a random-access key-value store of those xattrs). And both of these approaches (bundle Resources dirs, and resource forks) provided the same explorability / moddability as IFF files do. People would throw a macOS program into ResEdit and pull out its icons, its fonts, its strings, whatever — where those weren't program-domain-specific things, but rather were effectively items with standardized media types (their FourCC codes being effectively the predecessor of modern MIME types.)

For that matter, consider this quote from the IFF wiki page:

> There are standard chunks that could be present in any IFF file, such as AUTH (containing text with information about author of the file), ANNO (containing text with annotation, usually name of the program that created the file), NAME (containing text with name of the work in the file), VERS (containing file version), (c) (containing text with copyright information).

Now, remember that IFF decoders are almost always expected / coded to ignore chunks they don't understand. (Especially for IFF files encoded as a toplevel stream of heterogeneous chunk types.)

That means that not only can various format authors decide to use these standard chunks... but third-party editors can also just drop chunks like this into the things they edit! You know how Windows has that "name, author, version" etc info on the Properties sheet for some file types? That info could show up and be editable for any IFF-based file format — whether the particular format has an "allowance" for it or not.

(There's nothing special about IFF here, by the way. You could just as well drop "foreign-namespaced attributes" like this into an e.g. XML-based document format. The difference is a cultural one: the developers of XML-based document formats tend to have their XML decoders validate their documents for strict conformance to an XML schema; and XML schemas tend to be [but don't have to be!] designed as whitelists of the possible tags that can be used within any given parent nesting path. IFF, meanwhile, has never had anything like a schema-based document validation. Every document was best-effort parsed, like HTML4; and so every IFF-based format decoder is a best-effort decoder, like a web browser parsing HTML4. That very lack of schema-based validation, actually opens up a lot of use-cases for IFF.)

derefr 4 days ago|||
(Separate reply for space)

> We really shouldn't be making new standards with big endian byte order.

IFF isn't a wire protocol standard for efficient zero-copy; and nor is it intended for file formats amenable to being streaming-parsed.

And that's okay! Not every format needs to be suited to efficient, scalable, concurrent, [other lovely words] message passing!

IFF has two major use-cases:

1. documents that are "loaded" in some program, where "loading" is expected to occur against a random-access block device; where each chunk will be visited in turn, with either its contents being parsed into an in-memory representation; its contents' slicing bounds being stored to later stream or random-access within (or the part of the file within those bounds being mmap(2)ed — same thing); or that chunk discarded, thus allowing the load operation to skip issuing any read ops for it or its descendants entirely.

This is the PNG use-case.

(Though, interestingly enough, since PNG has only one large chunk — the image data — PNG can be made into an "effectively-streamed format" simply by keeping that big chunk at the end of the IFF document. Presuming the stream length of the PNG file is known [as in a regular HTTP fetch], the "skeleton load" process for PNG can terminate after just having parsed its way through all the other tiny chunks — perhaps with a few minimal buffer waits to skip over unknown chunks — but with no need to buffer the entire image data chunk. [It adds the image-data-chunk length to the file pointer, realizes there's no more room for chunks in the stream, and so doesn't bother to buffer+seek past that final chunk.] The IFF parser then returns to the caller, passing it the slicing bounds of [among other things] the (still not-yet-fully-received) image-data chunk. And the caller can then turn around, and hand the same FILE pointer and those slicing bounds to its streaming renderer, letting it go to town consuming the stream as needed.)

IFF in its skeleton-loading model, would also be ideal for something like e.g. a font file (which has lots of little tables, which are either eagerly parsed, or ignored, by any given renderer.)

2. simple "read-rarely" packfile documents, that act sort of like little databases, but without any sort of TOC header part; where, when you want to grab something from the packfile, you re-navigate down through it from the root, taking the IOPS hit from all the seeks to each nesting-parent chunk's preceding sibling chunks before hitting the descendant you want to navigate into.

This is the use-case of most IFFv1 file formats — most of them were made for use by programs that would grab this or that for the program's use either once at startup, or when the thing became relevant. (Think of the types of things a Windows executable embeds as "resources" — icons, translated strings, XAML declarative-MVC-view documents, etc.)

For a parallel, IFF here is to "using an entire archive-format library like tar or zip to store these assets for random access", as "spitting CSV/XML out using template strings" is to using a library to encode a table to a Parquet/ORC/etc. table.

The parallel is that in both cases, you're trading some performance and robustness, for massively reduced complexity and ease of implementation. Like with emitting CSV, you can slop together an IFF encoder right there inside your data-emitting logic — in any language that can write out binary files, and without even having access to the Internet, let alone adding a dependency on an encoder package in some package ecosystem. You can do it in C; you can do it in assembly; you can do it in a bash script; you can do it in BASIC; you can do it in a Windows batch file; you can do it in your single-file Python or Ruby or Perl script that lives in your repo. You can probably do it in a Makefile!

(Also, given how IFF parsing works [i.e. given that any given chunk's contents is in superposition of being either an opaque binary slice or a potential stream of child chunks, with a streaming event-based parser able to decide at each juncture whether to take that step of decoding the child chunks or to leave them as an undecoded binary for now], if you start to care about performance, you can just stick some memoization in front of your "fetch a key-path-lens KP from document D" function, and now you're building a just-in-time TOC. And obviously you can put TOC chunks in your IFF-based file formats if you want — though IMHO doing so kind of goes against the spirit of IFF.)

---

In neither of those use-cases does it really matter that lengths require reading four bytes one-at-a-time with left-shifts, rather than being able to just plop the four bytes into a register. These aren't cases where the parse overhead of the structural glue between the data will ever be non-trivial relative to the time it takes to consume the data itself.

And even if you did want to use IFF for something crazy, like as a substitute for Protobuf: did you know that most modern CPU ISAs have a byte-shuffle instruction that can transform big-endian into little-endian [among an unbounded number of other potential transformations] in a single cycle? Endian-ness did matter in protocol design for a while... but these days, unless you're e.g. a Google engineer designing a new SAN protocol, and optimizing it for message-handling overhead on your custom SDN L7 network-switch silicon that doesn't have a shuffle op... endian-ness is mostly irrelevant again!

kmeisthax 5 days ago|||
[dead]
80x86 5 days ago||
It would be nice if PNG supported no compression. That is handy in many situations.
joshmarinacci 5 days ago||
A fun trick I do with my web based drawing tools is to save a JSON representation of your document as a comment field inside of a PNG. This way the doc you save is immediately usable as an image but can also be loaded back into the editor. Also means your downloads folder isn’t littered with unintelligible JSON files.
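
A minimal sketch of that trick with Pillow (the chunk keyword and document model here are made up; zip=True stores the text as a compressed zTXt chunk instead of plain tEXt):

    import json
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    doc = {"shapes": [{"type": "rect", "x": 10, "y": 20, "w": 100, "h": 50}]}    # made-up document model

    meta = PngInfo()
    meta.add_text("editor-document", json.dumps(doc), zip=True)    # zip=True -> compressed zTXt chunk
    Image.open("drawing.png").save("drawing-saved.png", pnginfo=meta)

    # Later, the editor reloads the document straight out of the "image":
    doc_again = json.loads(Image.open("drawing-saved.png").text["editor-document"])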
dtech 5 days ago||
A fun trick, but I wouldn't want to explain to users why their documents are saved as a .png, nor why their work is lost after they opened and saved the PNG in Paint.
account42 4 days ago|||
It can also become a security issue when users inadvertently share layers/history/whatever that isn't visible anymore in the final image but is still in the editable part.
KetoManx64 5 days ago|||
If a user is using paint to edit their photos, they're 100% not going to be interested in having the source document to play around with.
shiryel 5 days ago|||
That is also how Krita stores brushes. Unfortunately, that can cause some unexpected issues when there's too much data [1][2].

[1] - https://github.com/Draneria/Metallics-by-Draneria_Krita-Brus...

[2] - https://krita-artists.org/t/memileo-impasto-brushes/92952/11...

oakwhiz 5 days ago||
If a patch is needed for libpng to get around the issue, maybe Krita should vendor libpng for usability. It's not unreasonable for people to want to create gigantic files like this.
speps 5 days ago|||
Macromedia Fireworks did it 20 years ago; PNG was the default save format. Of course, it wasn't JSON stored in there…
usef- 5 days ago||
I was going to say the same thing. It was nice as their native save format could still be opened anywhere.

But you did need to remember to export if you didn't want the extra fields increasing the file size. I remember finding Fireworks PNGs on web pages many times back then.

IvanK_net 5 days ago|||
Macromedia did this when saving Fireworks files into PNG.

Also, Adobe saves AI files into a PDF (every AI file is a PDF file), and Photoshop can save PSD files into TIFF files (people wonder why these TIFFs have several layers in Photoshop, but just one layer in all other software).

giancarlostoro 5 days ago||
> Macromedia did this when saving Fireworks files into PNG.

I forgot about this...

Fireworks was my favorite image editor, I don't know that I've ever found one I love as much as I loved Fireworks. I'm not a graphics guy, but Fireworks was just fantastic.

IvanK_net 5 days ago||
BTW. I am the author of https://www.photopea.com , which is the only software that can open Fireworks files today :D If you have any files, try to open them (it runs instantly in your browser).

https://community.adobe.com/t5/fireworks-discussions/open-fi...

eigenvalue 4 days ago|||
You’re doing god’s work here, thanks for your service! I use photopea all the time. Probably the most impressive web app I’ve seen in terms of performance.
speps 5 days ago||||
Do you have any info on the format used in the PNG chunks? I’d love for someone to recreate Fireworks, it was perfectly adapted to a lot of workflows.
Andrex 4 days ago|||
Proud paid Photopea user here. I can't understand how you guys overcame my mountain of incredulity but you have saved my ass so much. I was literally looking into dual booting before I found your product.

(Not many things handle .ai so well either!!)

neuronexmachina 5 days ago|||
This would be great for things like exported Mermaid diagrams.
tomtom1337 5 days ago|||
Could you expand on this? It sounds a bit preposterous to save text, as JSON, inside an image - and then expect it to be immediately usable… as an image?
bitpush 5 days ago|||
Not OP, but PNG (and most image/video formats) allows metadata and most allows arbitrary fields. Good parsers know to ignore/safely skip over fields that they are not familiar with.

https://dev.exiv2.org/projects/exiv2/wiki/The_Metadata_in_PN...

This is similar to HTTP request headers, if you're familiar with that. There are a set of standard headers (User-Agent, ETag etc) but nobody is stopping you from inventing x-tomtom and sending that along with HTTP request. And on the receiving end, you can parse and make use of it. Same thing with PNG here.

LeifCarrotson 5 days ago||||
They're not saving text, they're saving an idea - a "map" or a "CAD model" or a "video game skin" or whatever.

Yes, a hypothetical user's sprinkler layout "map" or whatever they're working on is actually composed of a few rectangles that represent their house, and a spline representing the garden border, and a circle representing the tree in the front yard, and a bunch of line segments that draw the pipes between the sprinkler heads. Yes, each of those geometric elements can be concisely defined by JSON text that defines the X and Y location, the length/width/diameter/spline coordinates or whatever, the color, etc. of the objects on the map. And yes, OP has a rendering engine that can turn that JSON back into an image.

But when the user thinks about the map, they want to think about the image. If a landscaping customer is viewing a dashboard of all their open projects, OP doesn't want to have to run the rendering engine a dozen times to re-draw the projects each time the page loads just to show a bunch of icons on the screen. They just want to load a bunch of PNGs. You could store two objects on disk/in the database, one being the icon and another being the JSON, but why store two things when you could store one?

chown 5 days ago||||
The text is saved as JSON in comments, but the file itself is a PNG, so you can use it as an image (like previewing it), since viewers ignore the comments. However, the OP's editor can load the file back, parse the comments, get the original data, and continue to edit. Just one file to maintain. Quite clever actually.
woodrowbarlow 5 days ago||||
this is useful for code that renders images (e.g. data-visualization tools). the image is the primary artifact of interest, but maybe it was generated from data represented in JSON format. by embedding the source data (invisibly) in the image, you can extract it later to modify and re-generate.
behnamoh 5 days ago||||
No, GP meant they add the JSON text to the metadata of the image as a comment.
meindnoch 5 days ago|||
Check what draw.io does when you download a PNG.
japanuspus 4 days ago|||
This is such a great use. Excalidraw does this too [0], and uses a two-level extension, `.excalidraw.png`.

[0]: https://excalidraw.com/

geekifier 5 days ago|||
This is also how Valetudo delivers robot map data to Home Assistant https://hass.valetudo.cloud.
akx 5 days ago|||
This is what stable-diffusion-webui does too (though the format is unfortunately plaintext); ComfyUI stores the node graph as JSON, etc.
paisawalla 5 days ago|||
Are you the developer of draw.io?
dragonwriter 5 days ago|||
This is also what many AI image gen frontends do, saving the generation specs as comments so you can open the image and get the prompt and settings (or, e.g. for ComfyUI, full workflows) loaded to tweak.

Really, I think it's pretty common for tools that work with images generally.

osetnik 5 days ago||
> save a JSON representation of your document as a comment field inside of a PNG

Can you compress it? I mean, theoretically there is this 'zTXt' chunk, but it never worked for me, therefore I'm asking.

ksec 5 days ago||
It is just a spec on something widely implemented already.

Assuming next-gen PNG will still require a new decoder, they could just call it PNG2.

JPEG-XL already provides everything most people have asked for in a lossless codec. If there is any problem, it is its encoding and decoding speed and resource usage.

Current champion of Lossless image codec is HALIC. https://news.ycombinator.com/item?id=38990568

thesz 5 days ago||
HALIC discussion page [1] says otherwise.

[1] https://encode.su/threads/4025-HALIC-(High-Availability-Loss...

It looks like LEA 0.5 is the champion.

And HALIC is not even close to the top ten in this [2] lossless image compression benchmark.

[2] https://github.com/WangXuan95/Image-Compression-Benchmark

poly2it 5 days ago|||
It looks like HALIC offers very impressive decode speeds within its compression range.
ksec 5 days ago||
And not just decoding speed but also encoding speed, with a difference of an order of magnitude. There are some new results further down in the comments in this thread. Had it not been verified, I would have thought it was a scam.
Nanopolygon 4 days ago|||
Champion! I wish you hadn't commented so ignorantly. Please also try to look at the processing speeds! Or you can also try not to comment at all...
Aloisius 5 days ago|||
I'll be honest, I ignored JPEG XL for a couple years because I assumed that it was merely for extra large images.
voxleone 5 days ago|||
I'm using PNG in a computer vision image annotation tool [0]. The idea is to store the class labels directly in the image [dispensing with the sidecar text files], taking advantage of the beautiful PNG metadata capabilities. The next step is to build a specialized extension of the format for this kind of task.

[0] https://github.com/VoxleOne/XLabel

illiac786 5 days ago|||
> If there are any problems it is its encoding and decoding speed and resources.

And this will improve over time, like jpg encoders and decoders did.

ksec 5 days ago|||
I hope I am very wrong, but this isn't a given. In the past, reference encoders and decoders did not concern themselves with speed and resources, but the last 10 years have shown that most reference encoders and decoders have already put considerable effort into speed optimisation. And it seems people are already looking at hardware JPEG XL implementations. (I hope and guess this is for lossless only.)
illiac786 5 days ago||
I would agree we will see fewer improvements than when comparing modern JPEG implementations to the reference one.

When it comes to hardware encoding/decoding, I am not following your point I think. The fact that some are already looking at hardware implementation for JPEG XL means that….?

I just know JPEG hardware acceleration is quite common, hence I am trying to understand how that makes JPEG XL different/better/worse?

ksec 5 days ago||
In terms of PC usage, JPEG decoding, and most image codec decoding in general, is done in software and not hardware. AFAIK even AVIF decoding is done in software in browsers.

Hardware acceleration for lossless makes more sense for JPEG XL because it is currently very slow. As the author of HALIC posted in some results below, JPEG XL is about 20-50x slower while requiring lots of memory even after memory optimisation, and about 10-20 times slower compared to other lossless codecs. JPEG XL is already used by cameras and stored as DNG, but encoding resources are limiting its reach. Hence a hardware encoder would be great.

For lossy JPEG XL, not so much. Just like with video codecs, hardware encoders tend to focus on speed, and it takes multiple iterations or 5-10 years before they catch up on quality. JPEG XL is relatively new, with so many tools and usage optimisations that even the current software encoder is far from reaching the codec's potential. And I don't want a crappy-quality JPEG XL hardware encoder, hence I much prefer an upgradeable software encoder for JPEG XL lossy and a hardware encoder for JPEG XL lossless.

spider-mario 4 days ago||
Lossless JPEG XL encoding is already fast in software and scales very well with the number of cores. With a few cores, it can easily compress 100 megapixels per second or more. (The times you see in the comment with the DPReview samples are single-threaded and for a total of about 400 MP since each image is 101.8MP.)
Nanopolygon 4 days ago||
HALIC does almost the same degree of compression tens of times faster. And interestingly, it consumes almost no memory at all. Unfortunately, this is the case.
spider-mario 4 days ago||
HALIC being faster still doesn’t mean that lossless JXL is so slow as to warrant hardware acceleration.
Nanopolygon 4 days ago||
Yes, I also think that HALIC should be destroyed ;)
account42 4 days ago|||
Or it won't, like JPEG 2000 encoders didn't.
illiac786 4 days ago||
I mean, if jxl becomes mainstream, of course.
bla3 5 days ago|||
WebP lossless is close to state of the art and widely available. It's also not widely used. The takeaway seems to be that absolute best performance for lossless compression isn't that important, or at least it won't get you widely adopted.
ProgramMax 5 days ago|||
WebP maxes out at 8 bits per channel. For HDR, you really need 10- or 12-bit.

WebP is amazing. But if I were going to label something "state of the art" I would go with JPEGXL :)

mchusma 5 days ago||||
I don't know that I have ever used JPEG or PNG lossless in practical usage (e.g. I don't think 99.9% of mobile app or web use cases are for lossless). WebP lossy performance is just not worth it in practice, which is why WebP never took off IMO.

Are there usecases for lossless other than archival?

kbolino 4 days ago|||
I definitely noticed when the Play Store switched to lossy icons. I can still notice it to this day, though they did at least make it harder to notice (it was especially apparent on low-DPI displays). Fortunately, the apps once installed still seem to use lossless icons.

A lot of images should be lossless. Icons/pictograms/emoji, diagrams and line drawings (when rasterized), screenshots, etc. You can sometimes get away with large-resolution lossy for some of these if you scale it down, but that doesn't necessarily translate into a smaller file size than a lossless image at the intended resolution.

There's another problem with lossy images, which is re-encoding. Any app/site that lets you upload/share an image but also insists on re-encoding it can quickly turn it into pixelated mush.

Inityx 5 days ago|||
Asset pipelines for media creation benefit greatly from better compression of lossless images and video
adzm 5 days ago||||
Only downside is that webp lossless requires RGB colorspace so you can't, for example, save direct YUV frames from a video losslessly. AVIF lossless does support this though.
account42 4 days ago|||
Last I checked cwebp does not preserve PNG color space information properly so the result isn't actually visually lossless.
ChrisMarshallNY 5 days ago|||
Looks like it's basically reaffirming what a lot of folks have been doing, unofficially.

For myself, I use PNG only for computer-generated still images. I tend to use good ol' JPEG for photos.

yyyk 5 days ago|||
When it comes to metadata, a feature not being widely implemented (yet) is not that big a problem. Select tools will do for metadata, so this is an advancement for PNG.
klabb3 5 days ago|||
What about transparency? That’s the main benefit of PNG imo.
cmiller1 5 days ago||
Yes JPEG-XL has an alpha channel.
HakanAbbas 5 days ago||
I don't really understand what the new PNG does better. Elements such as speed or compression ratio are not mentioned. Thanks also for your kind thoughts ksec.

Apart from widespread codec support, there are 3 important elements: processing speed, compression ratio, and memory usage. These are all taken into account when making a decision (the Pareto limit). In other words, being the fastest or having the best compression alone does not matter. Otherwise, the situation can be interpreted as insufficient knowledge and experience about the subject.

HALIC is very good at lossless image compression in terms of speed/compression ratio. It also uses a comically small amount of memory. No one mentioned whether this was necessary or not. However, low memory usage negatively affects both the processing speed and the compression ratio. You can see the real performance of HALIC only on large (20 MPixel+) images (single- and multi-threaded). An example of a current test is below. During operation, HALIC uses only about 20 MB of memory, while JXL uses more than 1 GB.

https://www.dpreview.com/sample-galleries/6970112006/fujifil...

    June 2025, i7 3770k, Single Thread Results
    ----------------------------------------------------
    First 4 JPG Images to PPM, Total 1,100,337,479 bytes
    HALIC NORMAL  :   5.143s   6.398s  369,448,062 bytes
    HALIC FAST    :   3.481s   5.468s  381,993,631 bytes
    JXL 0.11.1 -e1:  17.809s  28.893s  414,659,797 bytes
    JXL 0.11.1 -e2:  39.732s  26.195s  369,642,206 bytes
    JXL 0.11.1 -e3:  81.869s  72.354s  371,984,220 bytes
    JXL 0.11.1 -e4: 261.237s  80.128s  357,693,875 bytes
    ----------------------------------------------------
    First 4 RAW Images to PPM, Total 1,224,789,960 bytes
    HALIC NORMAL  :   5.872s   7.304s  400,942,108 bytes
    HALIC FAST    :   3.842s   6.149s  414,113,254 bytes
    JXL 0.11.1 -e1:  19.736s  32.411s  457,193,750 bytes
    JXL 0.11.1 -e2:  42.845s  29.807s  413,731,858 bytes
    JXL 0.11.1 -e3:  87.759s  81.152s  402,224,531 bytes
    JXL 0.11.1 -e4: 259.400s  83.041s  396,079,448 bytes
    ----------------------------------------------------

I had a very busy time with HALAC. Now I've given it a break, too. Maybe I can go back to HALIC, which I left unfinished, and do better. That is, stronger compression and/or faster. Or I can make it work much better on synthetic images. I can also add a near-lossless mode. But I don't know if it's worth the time I would have to spend on it.

account42 4 days ago||
> In other words, the fastest or the best compression maker alone does not matter.

Strictly true, but e.g. for archival or content delivered to many users, the compression speed and memory needed for compression are an afterthought compared to compressed size.

HakanAbbas 4 days ago||
Storage is cheaper than it used to be. Bandwidth is also cheaper than it used to be (though not as cheap as storage). So high quality lossy techniques and lossless techniques can be adopted more than low quality lossy compression techniques. Today, processor cores are not getting much faster. And energy is still not cheap. So in all my work, processing speed (energy consumption) is a much higher priority for me.
boogerlad 4 days ago||
You're right, but aren't you forgetting that for each image, the encode cost needs to be paid just once, but the decode time must be paid many many times? Therefore, I think it's important to optimize size and decode time.
HakanAbbas 4 days ago||
HALIC's decode speed is already much faster compared to other codecs, and when you look at the compression ratios, they are almost the same. There doesn't seem to be a problem there. There are also cases where encode speed is especially important. But I think there is no need to spend a lot more energy to gain a few percent more compression and then decode it.
qwertox 6 days ago||
> Officially supports Exif data

Probably the best news here. While you already can write custom data into a header, having Exif is good.

BTW: Does Exif have a magnetometer (rotation) and acceleration (gravity) field? I often wonder why Google isn't saving this information in the images which the camera app saves. It could help so much with post-processing, like leveling the horizon or creating panoramas.

Aardwolf 6 days ago||
Exif can also cause confusion for how to render the image: should its rotation be applied or not?

Old decoders and new decoders now could render an image with exif rotation differently since it's an optional chunk that can be ignored, and even for new decoders, the spec lists no decoder recommendations for how to use the exif rotation

It does say "It is recommended that unless a decoder has independent knowledge of the validity of the Exif data, the data should be considered to be of historical value only.", so hopefully the rotation will not be used by renderers, but it's only a vague recommendation, there's no strict "don't rotate the image" which would be the only backwards compatible way

With jpeg's exif, there have also been bugs with the rotation being applied twice, e.g. desktop environment and underlying library both doing it independently

DidYaWipe 5 days ago||
The stupid thing is that any device with an orientation sensor is still writing images the wrong way and then setting a flag, expecting every viewing application to rotate the image.

The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.

ralferoo 5 days ago|||
One interesting thing about JPEG is that you can rotate an image with no quality loss. You don't need to convert each 8x8 square to pixels, rotate and convert back, instead you can transform them in the encoded form. So, rotating each 8x8 square is easy, and then rotating the image is just re-ordering the rotated squares.
pwdisswordfishz 5 days ago|||
That doesn't seem to apply to images that aren't multiples of 8 in size, does it?
justincormack 5 days ago|||
the stored image is always a multiple of 8, with padding that is ignored (and heavily compressed).
pwdisswordfishz 5 days ago||
But can this lossless rotation process account for padding not being in the usual place (lower right corner presumably)?
mort96 5 days ago||
I'm not sure if this is how JPEG implements it, but in H.264, you just have metadata which specifies a crop (since H.264 also encodes in blocks). From some quick Googling, it seems like JPEG also has EXIF data for cropping, so if that's the mechanism that's used to crop off the bottom and right portions today, there's no reason it couldn't also be used to crop off the top and left portions when losslessly rotating an image's blocks.
hidroto 5 days ago|||
are there any cameras that take pictures that are not a multiple of 8 in width and height?
bdavbdav 5 days ago||
People may crop
DidYaWipe 5 days ago||||
Indeed. Whenever I'm using an image browser/manager application that supports rotating images, I wonder if it's doing JPEG rotation properly (as you describe) or just flipping the dumb flag.
account42 4 days ago||
Or lossy re-encoding.
DidYaWipe 4 days ago||
Yes, worst of all.
meindnoch 5 days ago||||
Only if the image width/height is a multiple of 8. See: the manpage of jpegtran, especially the -p flag.
dylan604 5 days ago|||
Slight nitpicking, but you can rotate in 90° increments without loss.
klabb3 5 days ago||||
TIL, and hard agree (on face value). I’ve been struck by this with arbitrary rotation of images depending on application, very annoying.

What are the arguments for this? It would seem easier for everyone to rotate and then store exif for the original rotation if necessary.

kllrnohj 5 days ago||
> What are the arguments for this? It would seem easier for everyone to rotate and then store exif for the original rotation if necessary.

Performance. Rotation during rendering is often free, whereas the camera would need an intermediate buffer + copy if it's unable to change the way it samples from the sensor itself.

DidYaWipe 5 days ago|||
Given that rotation sensors have been standard equipment on most cameras (AKA phones) for many years now, I would expect pixel-reordering to be built into supporting ASICs and to impose negligible performance penalties.
airstrike 5 days ago|||
How is rotation during rendering free?
kllrnohj 5 days ago|||
For anything GPU-rendered, applying a rotation matrix to a texture sample and/or frame-buffer write is trivially cheap (see also why Vulkan prerotation exists on Android). Even ignoring GPU-rendering, you always are doing a copy as part of rendering and often have some sort of matrix operation anyway at which point concatenating a rotation matrix often doesn't change much of anything.
account42 4 days ago||
The cost is paid in different memory access patterns which may or may not be mitigated by the GPU scheduler. It's an insignificant cost either way though, both for the encoder and the renderer. Also depending on the pixel order in sensor, file or frame buffer "rotated" might actually be the native way and the default is where things get flipped around from source to destination.
kllrnohj 4 days ago||
Access pattern is mitigated by texture swizzling which will happen regardless of how it's ultimately rendered. So even if drawn with an identity matrix you're still "paying" for it regardless just due to the underlying texture layout. GPUs can sample from linear textures, but often it comes with a significant performance penalty unless you stay on a specific, and undefined, path.
chainingsolid 5 days ago|||
Pretty much every pixel rendered these days was generated by a shader, so GPU-side you probably already have way more transformation options than just a 90° rotation (likely already being used for a rotation of 0°). You'd likely have to write more code CPU-side to handle telling the GPU "rotate this, please" and to handle the UI layout difference. Honestly not a lot of code.
Someone 5 days ago||||
> The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.

The hardware likely is optimized for the common case, so I would think that can be a lot slower. It wouldn’t surprise me, for example, if there are image sensors out there that can only be read out in top to bottom, left to right order.

Also, with RAW images and sensors that aren't rectangular grids, I think that would complicate RAW image parsing. Code for that could have to support up to four different formats, depending on how the sensor is designed.

DidYaWipe 5 days ago|||
At this point I expect any camera ASICs to be able to incorporate this logic for plenty-fast processing. Or to do it when writing out the image file, after acquiring it to a buffer.

Your raw-image idea is interesting. I'm curious as to how photosites' arrangement would play into this.

account42 4 days ago|||
Sensors are not read out as JPEG but into intermediate memory. The encoding step can then deal with the needed rotation.

RAW images aren't JPEGs so not relevant to the discussion.

mavhc 5 days ago|||
Because your non-smartphone camera doesn't have enough ram/speed to do that I assume (when in burst mode)

If a smartphone camera is doing it, then bad camera app!

Aardwolf 5 days ago|||
Rotation for speed/efficiency/compression reasons (indeed, with PNG's horizontal line filters it can have a compression reason too) should have been a flag that is part of the compressed image data format, for use by the encoder/decoder only, not part of metadata. (That does have caveats for renderers that handle partial decoding, but the point is to have the behavior rigorously specified, encoded in the image format itself, and handled in exactly one known place, namely the decoder.)

It's basically a shame that the exif metadata contains things that affect the rendering

Joel_Mckay 5 days ago||||
Most modern camera modules have built in hardware codecs like mjpeg, region of interest selection, and frame mirror/flip options.

This is particularly important on smartphones and battery operated devices. However, most smartphone devices simply save the photo the same way regardless of orientation, and simply add a display-rotated flag to the metadata.

It can be super annoying sometimes, as one can't really disable the feature on many devices. =3

joking 5 days ago||||
The main reason is probably that the chip is already outputting the image in a lossy format. If you reorder the pixels you must re-encode the image, which degrades it, so it's much better to just change the Exif orientation.
lsaferite 5 days ago|||
> the chip is already outputting the image in a lossy format

Could you explain this one?

DidYaWipe 5 days ago|||
Image sensors don't "output images in a lossy format" as far as I know.
account42 4 days ago|||
Burst mode in cameras means the sensor readout is buffered in RAM while the encoding and writing to persistent storage catch up. Rotating the buffer would be part of the latter and would not affect burst speed - and it is an insignificant cost anyway.
andsoitis 6 days ago|||
There is no standard field to record readouts of a camera's accelerometers or inertial navigation system.

Exif fields: https://exiv2.org/tags.html

bawolff 5 days ago|||
Personally I wish people just used XMP. Exif is such a bizarre format. It's essentially embedding a TIFF image inside a PNG.
jandrese 6 days ago|||
Yes, but websites frequently strip all or almost all Exif data from uploaded images because some fields are used by stalkers to track people down to their real address.
johnisgood 5 days ago|||
And I strip Exif data, too, intentionally, for similar reasons.
bspammer 5 days ago||
That makes sense to me for any image you want to share publicly, but for private images having the location and capture time embedded in the image is incredibly useful.
jandrese 5 days ago|||
If you are uploading it to a website you are sharing it. Even if the image is supposedly "private" you have to assume it will be leaked at some point. Remember, the cloud is just someone else's computer, and they can do what they want with their computer. They may also not be entirely competent at their job.
johnisgood 5 days ago||
Yes, once something has been shared (or stolen), you lost control over it, be it information or an image. EXIF data is fine, if it never leaves your device or if your device is not compromised.
johnisgood 5 days ago|||
If by private you mean "never shared", I agree.
sunaookami 5 days ago|||
That reminds me of when I first uploaded a picture to some forum and it showed my full home address together with a map as a "feature".
account42 4 days ago||
It is a feature because now you are aware of what you are sharing and can potentially delete it before too many others see it.
joshvm 5 days ago|||
There is an acceleration field (Exif.Photo.Acceleration) and an elevation-angle field (Exif.Photo.CameraElevationAngle), but oddly not all 3 axes. Similarly, there are fields for ambient environmental conditions, but only for whatever specific things the spec writers considered.

You could store this in Exif.Photo.MakerNote: "A tag for manufacturers of Exif writers to record any desired information. The contents are up to the manufacturer." I think it can be pretty big, certainly more than enough for 9 DoF position data.
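For example, a rough sketch with the third-party piexif library; the JSON payload layout here is just something I made up for illustration, not any standard:

    import json
    import piexif

    # hypothetical 9-DoF readings; any structure would do, MakerNote is opaque bytes
    readings = {"accel": [0.01, -0.02, 9.81],
                "gyro": [0.0, 0.0, 0.0],
                "mag": [22.5, -3.1, 40.2]}

    exif_dict = piexif.load("photo.jpg")
    exif_dict["Exif"][piexif.ExifIFD.MakerNote] = json.dumps(readings).encode("ascii")
    piexif.insert(piexif.dump(exif_dict), "photo.jpg")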

pezezin 5 days ago|||
Ages ago I worked on photogrammetry software, and the lack of such information was indeed painful for us. One of the most important parts of the processing pipeline is calculating the position and orientation of each camera; having at least the orientation would have made our lives much easier.
Findecanor 5 days ago||
Does the meta-data have support for opting in/out of "AI training"?

And is being able to read an image without an opt-in tag something that has to be explicitly enabled in the reference implementation's API?

albert_e 6 days ago||
So animated GIFs can be replaced by animated PNGs, with alpha blending, transparent backgrounds, and lossless compression! Some nostalgia from 2000s websites can be revived and relived :)

Curious if animated SVGs are also a thing. I remember seeing some JavaScript-based SVG animations (it was an animated chatbot avatar) - but not sure if there is any standard framework.

andsoitis 6 days ago||
> Curious if Animated SVGs are also a thing.

Yes. Relevant animation elements:

• <set>

• <animate>

• <animateTransform>

• <animateMotion>

See https://www.w3schools.com/graphics/svg_animation.asp
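For a flavour of what that looks like, here is a minimal sketch (plain SMIL <animate>, no JavaScript), written out by a short Python script just to keep it copy-paste runnable:

    # writes a self-contained animated SVG that pulses a circle's radius forever
    svg = """<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">
      <circle cx="60" cy="60" r="20" fill="teal">
        <animate attributeName="r" values="20;50;20" dur="2s" repeatCount="indefinite"/>
      </circle>
    </svg>
    """
    with open("pulse.svg", "w") as f:
        f.write(svg)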

shakna 5 days ago|||
Slightly related, I recently hit on this SVG animation bug in Chrome (that someone else found):

https://shkspr.mobi/blog/2025/06/an-annoying-svg-animation-b...

mattigames 6 days ago||||
Overshadowed by CSS animations for almost all use cases.
lawik 6 days ago|||
But animated gradient outlines on text is the only use-case I care about.
mattigames 5 days ago||
"Use case" is written without hyphen https://en.m.wikipedia.org/wiki/Use_case
WorldMaker 5 days ago|||
Hyphenation of multi-word nouns is a process in English that usually happens after some time of usage as separate words. It often happens before an eventual merger into a single compound noun. Such as: "Electronic Mail" to "E Mail" to "e-mail" to "email".

Given how often it is used as a jargon term in software development, I can absolutely see this usage of "use-case" here as a "vote" for the next step in the process. Will we eventually see "usecase" become common? It's possible. I think it might even be a good idea. I'm debating adding my own "votes" for the hyphen moving forward.

fkyoureadthedoc 5 days ago|||
I have to differentiate myself from LLMs by using words wrong though
account42 4 days ago|||
*in browsers

Most other SVG renderers don't support much CSS.

albert_e 5 days ago||||
Oh TIL - Thanks!

This could possibly be used to build full-fledged games like Pong and Breakout :)

jerf 5 days ago||
SVG also supports Javascript, which will probably be a lot more useful for games.
dveditz_ 5 days ago||
It supports JavaScript when used as a document, but when used as an "image" by a browser (IMG tag, CSS features) JavaScript and the loading of external resources are disabled.
riffraff 6 days ago|||
I was under the impression many gifs these days are actually served as soundless videos, as those basically compress better.

Can animated PNG beat av1 or whatever?

layer8 5 days ago|||
APNG would be for lossless compression, and probably especially for animations without a constant frame rate. Similar to the original GIF format, with APNG you explicitly specify the duration of each individual frame, and you can also explicitly specify looping. This isn’t for video, it’s more for Flash-style animations, animated logos/icons [0], or UI screen recordings.

[0] like for example these old Windows animations: https://www.randomnoun.com/wp/2013/10/27/windows-shell32-ani...

fc417fc802 5 days ago||
All valid points, however AV1 also supports lossless compression and is almost certainly going to win the file size competition against APNG every time.

https://trac.ffmpeg.org/wiki/Encode/AV1#Losslessencoding

meindnoch 5 days ago|||
False, or misleading.

The AV1 spec [1] does not allow RGB color spaces, therefore AV1 cannot preserve RGB animations in a bit-identical fashion.

[1] https://aomediacodec.github.io/av1-spec/av1-spec.pdf

pornel 5 days ago||
AV1 supports YCoCg, which encodes RGB losslessly.

It is a bit-reversible rotation of the RGB cube. It makes the channels look more like luma and chroma that the codec expects.

meindnoch 5 days ago||
False.

8-bit YCoCg (even when using the reversible YCoCg-R [1] scheme) cannot represent 8-bit RGB losslessly. The chroma channels would need 9 bits of precision to losslessly recover the original 8-bit RGB values.

[1] https://www.microsoft.com/en-us/research/wp-content/uploads/...
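A quick sketch (not any codec's actual code) that shows both halves of this sub-thread: YCoCg-R round-trips 8-bit RGB exactly, but the chroma channels span [-255, 255], i.e. they need 9 bits:

    def rgb_to_ycocg_r(r, g, b):
        co = r - b
        t = b + (co >> 1)
        cg = g - t
        y = t + (cg >> 1)
        return y, co, cg

    def ycocg_r_to_rgb(y, co, cg):
        t = y - (cg >> 1)
        g = cg + t
        b = t - (co >> 1)
        r = b + co
        return r, g, b

    samples = [(r, g, b) for r in (0, 1, 127, 255)
                          for g in (0, 1, 127, 255)
                          for b in (0, 1, 127, 255)]
    assert all(ycocg_r_to_rgb(*rgb_to_ycocg_r(*s)) == s for s in samples)  # exact round trip
    print(rgb_to_ycocg_r(255, 0, 0))    # Co reaches +255
    print(rgb_to_ycocg_r(255, 0, 255))  # Cg reaches -255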

pornel 4 days ago||
AVIF supports 10 and 12 bit encoding, which losslessly fits the 9-bit rotation of 8-bit data.

It's also possible to directly encode RGB (channels ordered as GBR) when you set identity matrix coefficients, it's just less efficient.

I've implemented this in my AVIF encoder, so I know what I'm saying.

meindnoch 3 days ago||
Show me any of the popular image conversion tools (avifenc, imagemagick, photoshop, ffmpeg, whatever...) that does the identity matrix hack when asking for lossless AVIF. None of them do it. Many people have been burned by "lossless" AVIF, where they converted their images in the mistaken belief that the result will be bit-identical to the original, only to find out that this wasn't the case, after they've deleted the original files.
fc417fc802 3 days ago||
That's shifting the goalposts from what the standard supports to the current state of the ecosystem. It's certainly an interesting point though. If common implementations all have bugs regarding lossless encoding that's a pretty bad situation.
account42 4 days ago|||
> is almost certainly going to win the file size competition against APNG every time

For video content maybe. Pixel-art gifs are not something video codecs do well at without introducing lots of artifacts.

fc417fc802 4 days ago||
Artifacts? We're talking about lossless compression here. There aren't any artifacts by definition.
account42 4 days ago||||
Soundless videos cannot be used in environments that expect an image like embeds in forums and similar.

It's a shame that browser vendors didn't add silent looping video support to the img tag over (imo) baseless concerns.

armada651 5 days ago||||
> Can animated PNG beat av1 or whatever?

Animated PNGs can't beat GIF, never mind video compression algorithms.

Aissen 5 days ago|||
> Animated PNGs can't beat GIF nevermind video compression algorithms.

Not entirely true, it depends on what's being displayed, see a few simple tests specifically constructed to show how much better APNG can be vs GIF and {,lossy} webp: http://littlesvr.ca/apng/gif_apng_webp.html

Of course I don't think it generalizes all that well…

armada651 5 days ago|||
You're correct and I was considering adding a footnote that if you use indexed colors like a GIF then PNG can beat GIF due to better compression algorithms. But when most people think of APNG they think of lossless compression rather than lossy compression.
account42 4 days ago||
Indexed can be lossless when the source already uses few colors, e.g. because you want to improve the compression of an existing GIF or limit colors for stylistic choice (common in pixel art).
bmacho 5 days ago|||
I tried these examples on ezgif, and indeed apng manages to be smaller than webp every single time. Weird, I was under the impression that webp was almost always smaller? Is this because GIF images are already special, or does apng use better compression than png?

edit: using the same ezgif webp and apng on an H.264 source, apng is suddenly 10x the size of webp. It seems apng is only better if the source is gif.

fc417fc802 5 days ago|||
I would guess that apng only wins when indexed colors can be used. That guess would match what you saw using an h264 file for the source.
Aissen 5 days ago||||
I have no idea! I actually hoped someone would show a much more comprehensive and serious benchmark in response, but that has failed to materialize.
account42 4 days ago|||
Almost like video codecs and animated images are different niches that optimize for different content.
jeroenhd 5 days ago|||
Once you add more than 256 different colours in total, GIF explodes in terms of file size. It's great for small, compact images with limited colour information, but it can't compete with APNG when the image becomes more detailed than what you'd find on Geocities.
pornel 5 days ago||
No, APNG explodes in size in that case.

In APNG it's either the same 256 colors for the whole animation, or you have to use 24-bit color. That makes the pixel data 3 times larger, which makes zlib's compression window effectively 3 times smaller, hurting compression.

OTOH GIF can add 256 new colors with each frame, so it can exceed 256 colors without the cost of switching all the way to 16.7 million colors.

bawolff 5 days ago||||
It's also because people like to "pause" animations, and that is not really an option with apng & gif.
bigfishrunning 5 days ago||
why not? that's up to the program displaying the animation, not the animation itself -- i'm sure a pausable gif or apng display program is possible
pornel 5 days ago|||
It's absolutely possible. Browsers even routinely pause playback when images aren't visible on screen.

They just don't have a proper UI and JS APIs exposed, and there's nothing stopping them from adding that.

IMO browsers are just stuck with tech debt, maintaining a no-longer-relevant distinction between "animations" and "videos". Every supported codec should work wherever GIF/APNG work, and vice versa.

It's not even a performance or complexity issue, e.g. browsers support AVIF "animations" as images, even though they're literally fully-featured AV1 videos, only wrapped in a "pretend I'm an image" metadata.

nextaccountic 5 days ago|||
> They just don't have a proper UI and JS APIs exposed, and there's nothing stopping them from adding that.

Browsers should just allow animated gifs and apngs in <video>

account42 4 days ago||
More important would be to allow (silent) videos in <img>.
joquarky 5 days ago|||
I wish browsers still paused all animations when the user hits the Esc key. It's hard to read when there are distracting animations all over most pages.
account42 4 days ago|||
Browsers used to support pausing GIFs by pressing the escape key.
josephg 5 days ago||||
I doubt it, given PNG is a lossless compression format. For video that's almost never what you want.
DidYaWipe 5 days ago||
For animations with lots of regions of solid color it could do very well.
josephg 5 days ago||
So do most other video formats. I'm not really seeing any advantages, and I see a lot of disadvantages vs h264 and friends.
account42 4 days ago||
Not without lots of artifacts.
fc417fc802 5 days ago|||
> many gifs these days are actually served as soundless videos

That's not really true. Some websites lie to you by putting .gif in the address bar but then serving a file of a different type. File extensions are merely a convention and an address isn't a file name to begin with so the browser doesn't care about this attempt at end user deception one way or the other.

faceplanted 5 days ago||
You said that's not really true and then described exactly how it's true. What did you mean?
fc417fc802 5 days ago||
I parsed the comment as something along the lines of clever hackers somehow stuffing soundless videos into gif containers which is most certainly not what is going on. I was attempting to convey that they have nothing to do with gifs. Gifs are not involved anywhere in the process.

I'm not sure why people disagree so strongly with what I wrote. Worst case scenario is that it's a slightly tangential but closely related rant about deceptive web design practices. Best case scenario is that someone who thought some sort of fancy trick involving gifs was in use learns something new.

chithanh 5 days ago|||
When it comes to converting small video snippets to animated graphics, I think WebP was much better than APNG from the beginning. Only if you used GIF as an intermediate format was APNG competitive.

Nowadays, AVIF serves that purpose best I think.

account42 4 days ago||
webm or any other non-gimped video codec would be a much better format for that use case. Unfortunately browsers don't allow those in image contexts so we are stuck with an inferior "state of the art" literally-webm-with-deliberately-worse-compression webp standard.

AVIF is only starting to become widespread, so it can't be used without a fallback if you care about your users. Not sure how it compares to AV1 quality/compression-wise, but hopefully it's not as gimped as webp and there will be encoders that aren't as crap as the libwebp that almost everyone uses.

chithanh 4 days ago||
> Unfortunately browsers don't allow those in image contexts

The fact that we have the <img> element at all is bad. HTML has had, since the early days, a perfectly capable <object> element which can even be nested to provide fallbacks, but browser support was always spotty.

The Acid2 test famously used <object> to shame browser vendors into supporting it at least to some extent.

bmacho 5 days ago|||
> Curious if Animated SVGs are also a thing.

SVG is basically HTML5: it has full support for CSS, JavaScript with buttons, web workers, arbitrary fetch requests, and so on (obviously not supported by image viewers, or allowed by browsers when the SVG is used as an image).

bawolff 5 days ago|||
Browsers support all that sort of thing, as long as you use an iframe. (Technically there are some subtle differences between that and HTML5, but you are right, it's mostly the same.)

If you use an <img> tag, svgs are loaded in "restricted" mode. This disables scripting and external resources. However animation via either SMIL or CSS is still supported.

account42 4 days ago||
And non-browser image renderers support almost none of those advanced totally-still-SVG features (and I don't blame them), while they often do support animated GIFs.
vorgol 5 days ago|||
It nearly got raw socket support back in the day: https://news.ycombinator.com/item?id=35381755
theqwxas 5 days ago|||
Some years ago I used the Lottie (Bodymovin?) library. It worked great and had a nice integration: you compose your animation in Adobe After Effects, export it to an SVG plus some JSON, and the Lottie JS script handles the animation for you. Anything else I've tried for (vector, web) animations is missing the tools or the DX for me to adopt. Curious to hear if there are more things like this.

I'm not sure about the tools and DX around animated PNGs. Is that a thing?

qingcharles 5 days ago|||
Almost nowhere that supports uploading GIFs supports APNG or animated WEBP. The back end support is so low it's close to zero. Which is really frustrating.
extraduder_ire 5 days ago||
Do you mean services that reencode gif files to webm/mp4? apng just works everywhere that png works, and will remain animated as long as it's not re-encoded.

You can even have one frame that gets shown if and only if animation is not supported.

qingcharles 4 days ago||
Yes, most places only show the first frame. They ignore the animation, sadly. Even while accepting GIFs.
extraduder_ire 3 days ago||
That's not the first frame, it's the fallback image that png decoders which are unaware of apng decode.

It is never shown by compliant apng decoders. You can make it the first frame of the animation, or any other image you want. e.g. some text saying "APNG unsupported"
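If anyone wants to play with that, here's a small sketch using Pillow (which can write APNG): with default_image=True the base image is only the fallback, and append_images supplies the real animation frames:

    from PIL import Image, ImageDraw

    # fallback shown by decoders that don't understand APNG
    fallback = Image.new("RGBA", (64, 64), (255, 0, 0, 255))
    ImageDraw.Draw(fallback).text((5, 26), "no APNG", fill=(255, 255, 255, 255))

    frames = []
    for i in range(8):
        frame = Image.new("RGBA", (64, 64), (0, 0, 0, 0))  # transparent background
        ImageDraw.Draw(frame).ellipse((i * 6, 24, i * 6 + 16, 40), fill=(0, 128, 255, 255))
        frames.append(frame)

    fallback.save("ball.png", save_all=True, default_image=True,
                  append_images=frames, duration=80, loop=0)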

jonhohle 5 days ago|||
It seems crazy to think about, but I interviewed with a power company in 2003 that was building a web app with animated SVGs.
account42 4 days ago|||
> So animated GIFs can be replaced by Animated PNGs with alpha blending with transparent backgrounds and lossless compression!

Not progressively though unless browsers add a new mime type for it which they did not bother to do with animated webp.

jokoon 5 days ago|||
both GIF and PNG use zipping for compressing data, so APNG are not much better than GIF
bawolff 5 days ago|||
PNG uses deflate (same as zip) but GIF uses LZW. These are different algorithms. You should expect different compression results i would assume.
account42 4 days ago||
ZIP is a generic container and theoretically supports a number of different compression methods. Stored (no compression) and deflate are the only ones you can count on being supported everywhere, though, so in practice you're not wrong.
Calzifer 5 days ago||||
(A)PNG supports semi-transparency. In GIF a pixel is either fully transparent or fully opaque.

Also, while true-color GIFs seem to be possible, GIF is usually limited to 256 colors per image.

For those reasons alone APNG is much better than GIF.

account42 4 days ago||
> Also while true color gifs seem to be possible it is usually limited to 256 colors per image.

No, it's limited to 256 colors per frame and frames can have duration 0 which allows you to combine multiple frames into more than 256 color images.

0points 5 days ago|||
Remember when we unwillingly trained the generative AIs of our time with an endless torrent of factoids?
qwertfisch 5 days ago||
Seems a bit too late? Also, JPEG XL supports all these features and already uses advanced compression (finite-state entropy, like Zstandard). It offers lossy and lossless compression, animated pictures, HDR, EXIF, etc.

There is just no need for a PNG update, just adopt JPEG XL.

bmn__ 5 days ago||
> just

https://caniuse.com/jpegxl

No one can afford to "just". Five years later and it's only one browser! Crazy.

Browser vendors must deliver first; only then is it okay to admonish an end user or web developer to adopt the format.

Dylan16807 5 days ago||
Adopt it anyway. Add a decoder. Don't let google bully you out of such a good format.
Dwedit 4 days ago|||
If JPEG-XL decompressed faster, I'd use it more. For now, I'm sticking with WEBP for lossless, and AVIF for lossy. AVIF's CDEF filter (directional deringing) works wonders, and it's too bad that JPEG-XL lacks such a filter.

JPEG-XL's lossy modular mode is a unique feature which needs a lot more exposure than it has. It is well suited to non-photographic drawings or images that aren't continuous-tone and have never touched any JPEG-like codec. It has different kinds of artifacts than what you typically see in a DCT image codec: rather than ringing, you get slight pixellation.

stgn 4 days ago||
> and it's too bad that JPEG-XL lacks such a filter

JPEG XL has an edge-preserving filter ("EPF") for the purpose of reducing ringing.

Aachen 5 days ago|||
> advanced compression (finite-state entropy, like ZStandard)

I've not tried it on images, but wouldn't zstandard be exceedingly bad at gradients? It completely fails to compress numbers that change at a fixed rate

Bzip2 handles that fine, not sure why: https://chaos.social/@luc/114531687791022934 The two variables (inner and outer loop) could be two color channels that change at different rates. Real-world data will never be a clean i++ like it is here, but more noise surely isn't going to help the algorithm compared to this clean example.

wongarsu 5 days ago|||
PNG's basic idea is to store the difference between the current pixel and the pixel above it, left of it or to the top-left (chosen once per row), then apply standard deflate compression to that. The first step basically turns gradients into repeating patterns of small numbers, which compress great. You can get decent improvements by just switching deflate for zstd
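A toy illustration of that idea (nothing like libpng's real code): predict each byte from the pixel above it, keep only the differences, then deflate. On a smooth gradient the residuals are almost all 0s and 1s, and swapping zlib.compress for a zstd compressor is the "decent improvement" mentioned above:

    import zlib

    W = H = 256
    # smooth radial gradient, one byte per pixel
    rows = [bytes(min(255, (x * x + y * y) // 512) for x in range(W)) for y in range(H)]

    raw = b"".join(rows)
    # "Up" filter: store each byte minus the byte directly above it (first row kept as-is)
    filtered = rows[0] + b"".join(
        bytes((rows[y][x] - rows[y - 1][x]) % 256 for x in range(W))
        for y in range(1, H)
    )

    print("raw:     ", len(zlib.compress(raw, 9)))
    print("filtered:", len(zlib.compress(filtered, 9)))  # much smaller on this input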
Aachen 4 days ago||
> You can get decent improvements by just switching deflate for zstd

Maybe I sounded too critical of zstd. To be clear: I use it for general-purpose compression where available; the only exception would be where eking out the last % of gain is important, slow decompression is acceptable, and the data has one of these patterns that Bzip2 handles better in the first place.

That it's better than deflate (afaik aka gzip and zlib, just with different header fields) is not surprising, since that was iirc the stated goal of the Zstandard project.

adgjlsfhk1 5 days ago||||
the FSE layer isn't responsible for finding these sorts of patterns in an image codec. The domain modeling turns that sort of pattern into repeated data and then the FSE goes to town on the output.
Retr0id 5 days ago|||
zlib/deflate already has the same issue. It is mitigated somewhat by PNG row filters.
mikae1 5 days ago|||
> There is just no need for a PNG update, just adopt JPEG XL.

Tell that to Google. They gave up on XL in Chrome[1] and essentially killed its adoption.

[1] https://issues.chromium.org/issues/40168998#comment85

pezezin 4 days ago|||
On the other hand, more and more software are adding support for JPEG XL. Photoshop just added it in the latest patch (https://helpx.adobe.com/photoshop/using/whats-new/2025-6.htm...), Apple has included it in the iPhone 16, it is easily available on Windows (https://apps.microsoft.com/detail/9mzprth5c0tb?hl=en-US&gl=U...), most major Linux distros already support it...

It is only a matter of time until the Chrome team has to reverse their decision.

rhet0rica 5 days ago|||
From reading that, "gave up" seems to mean "deliberately killed it so their own WebP2 wouldn't have competition." Behold the monopoly at the apex of its power.
account42 4 days ago|||
The really weird part is that both webp and jxl development were largely funded by Google. So it's not Google killing a competitor's format in favor of their own, but someone in one part of Google killing the format someone elsewhere in Google developed, in favor of their pet favorite.
rhet0rica 4 days ago||
There's no form of cloak-and-dagger BS more vicious than internecine BS.
spauldo 4 days ago|||
PNG went through that when Microsoft kept shipping incomplete and buggy support for it in IE. Hopefully something will come along to buck Chrome's monopoly and JPEG-XL will have its chance.
illiac786 5 days ago||
I really don’t get it. Why, but why? It’s already confusing as hell, why create yet another standard (variant) with no unique selling point?
pmarreck 5 days ago||
JPEG XL is not a "variant"; it is a completely new algorithm that is also fully backwards compatible with every single JPEG already out there, of which there are probably billions at this point.

It also has pretty much every feature desired in an image standard. It is future-proofed.

You can losslessly re-compress a JPEG into a JPEG-XL file and gain space.

It is a worthy successor to (while also being vastly superior to) JPEG.

dylan604 5 days ago|||
> You can losslessly re-compress a JPEG into a JPEG-XL file and gain space.

Is that gained space enough to account for the fact you now have 2 files? Sure, you can delete the original jpg on the local system, but are you going to purge your entire set of backups?

illiac786 5 days ago|||
If you do not want to delete the original JPEGs, there is no point in converting them to JPEG XL, I would say.

Unless serving jxl and saving bandwidth, while increasing your total storage, is worth it to you.

account42 4 days ago|||
Yes the whole point of lossless re-compression is that you do not need to keep the original JPEGs. Of course you don't need to "purge" backups, just let them rotate out normally.

Also backup storage is usually cheaper than something that needs to have fast access speeds.

dylan604 4 days ago||
For people who shoot digital cameras saving as JPEG, it will be a very cocky suggestion to tell them to toss out their camera-original files!

You'll know JPEG-XL is real when camera manufacturers allow for XL acquisition instead of legacy JPEG only.

BobaFloutist 5 days ago||||
Is there any risk that if I open a JPEG-XL in something that knows what a JPEG is but not what a JPEG-XL is and then save it, it'll get lossy compressed? Backwards compatibility is awesome, but I know that if I save/upload/share/copy a PNG, it shouldn't change without explicit edits, right?
illiac786 5 days ago||
Software that does not know what JPEG XL is will not be able to open JXL files. How would it?

Not sure what the previous poster meant by “backward compatible” here. JXL is a different format. It can include all the information a JPEG includes, which maybe qualifies as “backward compatible”, but it is still a different format.

liuliu 5 days ago|||
JPEG XL has the mode that in layman's word, allow bit-by-bit round-trip with JPEG.

Original JPEG -> JPEG XL -> Recreated JPEG.

Sha256(Original JPEG) == Sha256(Recreated JPEG).

That's what people meant by "backward compatible".

colejohnson66 5 days ago||
That’s not “backwards compatible”, but “round tripable” or “lossless reencode”
illiac786 20 hours ago||
Exactly, it is absolutely not backward compatible. It is a lossless-at-the-bit-level conversion of JPEG, but that doesn’t help older software in any way.
BobaFloutist 5 days ago|||
Ah, got it. I assumed it was a losslessly compressed JPEG with metadata telling modern software not to compress it differently, but that older software would open as a normal JPEG. I guess they meant something else by "backward compatible".
pmarreck 3 days ago||
I guess I meant losslessly round-trippable. In other words, you can go from jpeg -> jxl -> jpeg without any loss in quality, potentially (although with jxl -> jpeg -> jxl, you will lose space while it is a jpeg, and you'd probably have to pick a high compression quality in order to not lose information... you may also lose information such as metadata that jxl accommodates but jpeg does not, like transparency)

So backwards-compatible in the sense that the jpeg-xl algorithm spec can read jpg and store the same pixel data more efficiently as jxl if you like. You gain space and lose nothing (except perhaps encode/decode speed).

illiac786 5 days ago|||
I was referring to the new PNG, not to JPEG XL.
sdenton4 5 days ago||
Looking at TFA, it's placing in the spec a few things that are already widely stacked onto the format (such as animation). This is a very sensible update, and backwards compatible with existing PNG.
illiac786 5 days ago||
Not sure expanding PNG capabilities is sensible, looking at the overall landscape of image formats.
dveditz_ 5 days ago||
The capabilities are already expanded in most common implementations. This update is largely blessing those features as officially "standard".
cptcobalt 5 days ago||
It seems like this new PNG spec just cements what exists already, great! The best codecs are the ones that work on everything. PNG and JPEG work everywhere, reliably.

Try opening a HEIC or AV1 or something on a machine that doesn't natively support it down to the OS-level, and you're in for a bad time. This stuff needs to work everywhere—in every app, in the OS shell for quick-looking at files, in APIs, on Linux, etc. If a codec does not function at that level, it is not functional for wider use and should not be a default for any platform.

ecshafer 5 days ago||
I work with a LOT of images in a lot of image formats, many of them extremely niche formats used in specific fields. There is a massive challenge in really supporting all of these, especially when you get down to the fact that some specs are a little looser than others. Even libraries can be very rough: sure, it says on the tin that it supports JPG and TIF and HEIC... but does it support a 30GB JPEG? Does it support all possible metadata in the file?
lazide 5 days ago||
This new spec will make PNG even worse than HEIC or AV1 - you won’t know what codec is actually inside the PNG until you open it.
hulitu 5 days ago||
> you won’t know what codec is actually inside the PNG until you open it.

But this is a feature. Think about all those exploits made possible by this feature. Sincerely, the CIA, the MI-6, the FSB, the Mossad, etc.

lazide 4 days ago||
The more practical concern is that like AVI you can’t tell if you can read it until you try, which makes it a nightmare especially with codec rot.
ggm 6 days ago||
Somebody needs to manage human time/date approximations in a way other people in software will align with.

"photo scanned in 2025, is about something in easter, before 1940 and after 1920"

luguenth 5 days ago||
In EXIF, you have DateTimeDigitized [0]

For ambiguous dates there is the EDTF Spec[1] which would be nice to see more widely adopted.

[0] https://www.media.mit.edu/pia/Research/deepview/exif.html

[1] https://www.loc.gov/standards/datetime/
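If I'm reading the EDTF levels right, the grandparent's example maps onto it quite compactly, e.g.:

    1920/1940   an interval: some time between 1920 and 1940
    1931~       approximately 1931
    1931-21     Spring 1931 (EDTF's sub-year season codes)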

ggm 5 days ago||
I remember reading about this in a web forum mainly for dublin core fanatics. Metadata is fascinating.

Different software reacts in different ways to partial specifications of yyyy/mm/dd, such that you can try some of the cute tricks but probably only one software package honours them.

And the majors ignore almost all fields other than a core set of one or two, disagree about their semantics, and also do weird stuff with file names and atime/mtime.

SchemaLoad 5 days ago||
The issue that gets me is that Google Photos and Apple Photos will let you manually pick a date, but they won't actually set it in the photo's EXIF. So when you move platforms, all of the images that came from scans or were sent without EXIF lose their dates.
ggm 5 days ago|||
It's in sidecar files. Takeout gets them, some tools read them.
kccqzy 5 days ago||
But there is no standardization of sidecar files, no? Whereas EXIF is pretty standard.
jeroenhd 5 days ago|||
EXIF inside of PNGs is new. You can make it work by embedding structured chunks into the file, but it's not official in any way (well, not until the new spec, at least). Sidecar files have some kind of interoperable format that at least doesn't break buggy PNG parsers when you open the image file. The sidecar files themselves differ in format, but at least they're usually formatted according to their extension.

The usual sidecar files, XMP files, are standardised (in that they follow a certain extensible XML structure) and can (and often do) include EXIF file information.

SchemaLoad 5 days ago||
Pretty much all the photos in Apple/Google photos are going to be JPEG and HEIF which do support EXIF. But both services basically will not touch what came out of the camera at all. If you add a description or date, it gets stored externally to the image so when you export your data, those changes are lost. Or they get dumped in a JSON file requiring you to use some custom script to handle it.
account42 4 days ago||
Not touching the image for metadata changes is a good thing as that makes backups more efficient/simpler. Embedded metadata is also a security issue as users may share more information than they realize which is why it is common to strip it automatically in many places.
account42 4 days ago|||
XMP [0] is a standard, but I have no idea if Google and Apple use it (Darktable does). You could also store EXIF data as sidecar files, but I don't think that has better support.

[0] https://en.wikipedia.org/wiki/Extensible_Metadata_Platform

mbirth 5 days ago|||
IIRC osxphotos has an option to merge external metadata into the exported file.
LegionMammal978 6 days ago||
Reading the linked blog post on the new cICP chunk type [0], it looks like the "proper HDR support" isn't something that you couldn't already do with an embedded ICC profile, but instead a much-abbreviated form of the colorspace information suitable for small image files.

[0] https://svgees.us/blog/cICP.html
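Indeed, the whole chunk is tiny. A hedged sketch of building one (assuming the spec's four one-byte fields: colour primaries, transfer function, matrix coefficients, full-range flag; 9/16/0/1 would be BT.2100 primaries with PQ, RGB ordering, full range):

    import struct
    import zlib

    def png_chunk(ctype: bytes, data: bytes) -> bytes:
        # length + type + data + CRC over (type + data), as in any PNG chunk
        return (struct.pack(">I", len(data)) + ctype + data +
                struct.pack(">I", zlib.crc32(ctype + data)))

    cicp = png_chunk(b"cICP", bytes([9, 16, 0, 1]))
    print(len(cicp), cicp.hex())  # 16 bytes for the entire chunk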

cormorant 5 days ago||
"common but not representable RGB spaces like Adobe 1998 RGB or ProPhoto RGB cannot use CICP and have to be identified with ICC profiles instead."

cICP is 16 bytes for identifying one out of a "list of known spaces" but they chose not to include a couple of the most common ones. Off to a great start...

I wonder if it's some kind of legal issue with Adobe. That would also explain why EXIF / DCF refer to Adobe RGB only by the euphemism "optional color space" or "option file". [1]

[1] https://en.wikipedia.org/wiki/Design_rule_for_Camera_File_sy...

spider-mario 21 hours ago||
CICP (H.273) originates from the video world, where Adobe RGB and ProPhoto RGB really aren’t common.
ProgramMax 5 days ago|||
PNG previously supported ICC v2. That was updated to ICC v4. However, neither of these are capable of HDR.

Maybe iccMAX supports HDR. I'm not sure. In either case, that isn't what PNG supported.

So something new was required for HDR.

LegionMammal978 5 days ago||
> However, neither of these are capable of HDR.

How so? As far as I can tell, the ICCv2 spec is very agnostic as to the gamut and dynamic range of the output medium. It doesn't say anything to the extent of "thou shalt not produce any colors outside the sRGB gamut, nor make the white point too bright".

Unless HDR support is supposed to be something other than just the primaries, white point, and transfer function. All the breathless blogspam about HDR doesn't make it very clear what it means in terms of colorspaces.

ProgramMax 5 days ago||
IIRC (it's been a while), the reason was that ICCv2/v4 still require a gamma function, and PQ is not a gamma function. Maybe they can cover HLG, but if we want to represent any given HDR content, we needed something more than ICCv2/v4.
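For the curious, a rough sketch of why PQ doesn't fit the gamma mould: the ST 2084 inverse EOTF below (constants from the standard) maps absolute luminance up to 10,000 nits, and its "effective gamma" drifts instead of staying put:

    import math

    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

    def pq_encode(nits):
        y = max(nits, 0.0) / 10000.0          # normalize to the 10,000-nit peak
        return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

    for nits in (0.1, 1, 100, 1000, 5000):
        v = pq_encode(nits)
        # a pure gamma curve would give a constant exponent here; PQ does not
        print(nits, round(v, 4), round(math.log(v) / math.log(nits / 10000), 3))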
LegionMammal978 5 days ago||
That doesn't sound quite right to me. ICCv2's 'curveType' gives the option of a full lookup table instead of a simple gamma function. Maybe it has to do with ICCv2 saying that the reference viewing condition has an illumination level of 500 lx for the perceptual intent? (But how does that apply to non-reflective media?)

I don't doubt that there's lots of problems in the chain from RGB samples to display output, but I'm finding this whole thing horribly confusing. Wikipedia tries to distinguish 'HDR' transfer functions like PQ [0] from 'SDR' transfer functions in terms of their absolute luminance, but the ICC specs are just filled with relative values all the way down.

(Not to mention how much these things get fiddled with in practice. Once, I had the idea of writing a JPEG decoder, so I looked into how exactly to convert between sRGB and Rec. 601 YCbCr coordinates. I thought, "I know, I'll just use the standard-defined XYZ conversions to bridge between them!" But psych, the ICC sRGB profile has its own black point scaling that the standards don't tell you about. I'm still not sure what the correct answer is for "these sRGB coordinates represent the exact same color as these Rec. 601 YCbCr coordinates".)

[0] https://en.wikipedia.org/wiki/Perceptual_quantizer

ProgramMax 5 days ago||
Agreed that it gets confusing. That's a piece of why I'm unable to give you a solid answer. This isn't my area of expertise.

Here is what I can tell you confidently: The original plan was to provide an ICC profile that approximates PQ as best as we could. But it wasn't enough. So the proposal was to force the profile name to be a special string. When a PNG decoder saw that name, it would ignore the ICC profile and do actual PQ.

Here is that original proposal: https://w3c.github.io/png-hdr-pq/

Possibly more context (I just found this) from Apple. I'm not sure of date: https://www.color.org/hdr/02-Luke_Wallis.pdf Slide 29: "HDR parametric transfer functions not in ICC spec Parametric 3D tone mapping functions not in ICC spec - Neither can be approximated by 1-D or 3-D LUTs"

I'm not sure why they cannot be approximated by LUT. Maybe because of the inversion problem?

LegionMammal978 4 days ago||
Thanks for that proposal link. The email thread starting at [0] seems to explain some of the challenges. My understanding:

- In ICC-land, all luminances are relative to the display's (or reflective medium's) black and white points. So for an HDR-capable display, all content, HDR or SDR, would be naturally displayed at the full 10k nits or whatever the actual number is. This is obviously not how things work in practice: OSes and/or displays really want a signal as to whether the full HDR luminance is actually desired. (This reminds me of an earlier HN thread where people complained about HDR video forcing up the brightness on Apple devices.)

- PQ (but not HLG) specifies everything in terms of absolute luminance, but this gets confusing when people want to adjust their display brightness and have everything work relatively in practice.

- Due to lack of support for "overrange" behavior [1], 1D LUTs + matrices are insufficient for representing PQ at all, so you need a 3D LUT just to approximate it. This needs ICCv4, since ICCv2 only supports 3D LUTs for non-display profiles.

- But 3D LUTs are big and fat, and can only give a few bits of accuracy across some parts of the full HDR range. (It seems like there's no form of delta compression?) Most people really hate this. iccMAX can allegedly use 3D parametric formulas, but literally no one implements it since it has a million bells and whistles.

- More importantly, GPUs especially hate big fat LUTs, and everyone uses GPU rendering. In the worst case, some implementations will do everything they can to ignore LUTs in ICC profiles, and instead try to guesstimate some simple-gamma or linear-gamma approximation, which won't end well.

So it does seem to be a combination of "the HDR stack is a mess and needs its own special signaling" and practical concerns about avoiding overly huge profiles.

[0] https://lists.w3.org/Archives/Public/public-colorweb/2017May...

[1] https://lists.w3.org/Archives/Public/public-colorweb/2017May...

ProgramMax 4 days ago||
You....are wonderful. Thank you.
account42 4 days ago||
Unfortunately that seems to mean that the backwards compatibility here is a washed-out preview instead of a limited-to-sRGB rendering.
rynop 5 days ago|
This is a false claim in the PR:

> Many of the programs you use already support the new PNG spec: ... Photoshop, ...

Photoshop does NOT support APNGs. The PR calls out APNG recognition as the 2nd bullet point of "What's new?"

Am I missing something? Seems like a pretty big mistake. I was excited that an art tool with some marketshare finally supported it.

ProgramMax 5 days ago|
Photoshop supports the HDR part. But you are right, it does not support the APNG part.