Not backwards compatible.
We just add it to that nice cupboard "great advanced image formats we will forget about".
Society doesn't need a new image format. I'd wager to say not any new multimedia format. Big corporate entities do, and have been churning them out at a steady pace.
Look at poor webp - a format pushed by the largest industry players - and the abysmal everyday use it gets, and the hate it generates.
lioeters · 16m ago
> Not backwards compatible
They say it's technically compatible since older image decoders should recognize the PNG file is using a different compression algorithm than the default.
> Many programs already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
This is intentionally ignoring the fact that there are countless PNG decoders out in the wild, many using libpng (the standard decoder, last updated 6 years ago), and they will not be able to read the new PNG v2 files.
They should have used a different file extension, PNG2, to distinguish this incompatible format. Otherwise, users will be confused why their newly saved PNG file cannot be read by certain existing programs.
JKCalhoun · 11m ago
Many Mac apps do not consider the file extension but instead look for marker bytes within the file. While the Finder might use the extension to determine which app to launch ("Oh, an image file, let's open Preview") the app that is passed the file (Preview) will then look for various marker bytes to decide if it is a JPEG, PNG, etc.
(I am told by a certain LLM that the first 8 bytes of a PNG are the marker bytes: "89 50 4E 47 0D 0A 1A 0A". This is apparently in libpng itself ... so perhaps any OS or tool updating to a newer libpng will get the new format for free?)
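For what it's worth, the LLM is right here: that 8-byte signature is fixed for every PNG, old spec or new. A minimal sketch of the check in Python:

    PNG_SIGNATURE = bytes([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])

    def is_png(path: str) -> bool:
        # The first 8 bytes identify the container only; they say nothing
        # about which chunks or compression methods the rest of the file uses.
        with open(path, "rb") as f:
            return f.read(8) == PNG_SIGNATURE

So passing this check proves the file is a PNG container, not that your decoder can handle everything inside it.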
michaelmior · 2h ago
> and the abysmal everyday use it gets
Estimates are that 95% of Internet users have a browser that supports WebP and that ~25% of the top million websites serve WebP images. I wouldn't call that abysmal.
Geezus_42 · 1h ago
Great, so I can download it, but then I have to convert it to a different format before half my apps will be able to use it.
wltr · 1h ago
Maybe the issue is with your operating system then?
jdiff · 1h ago
App support has very little to do with the operating system. OSes by and large will preview it just fine.
dinkblam · 42m ago
On the contrary: on macOS, apps don't have to support image (or movie) formats. It is done by the system and transparently handled by the APIs. Apps automatically gain new formats when the system adds them.
reaperducer · 36m ago
The unfortunate side effect of this convenience is that apps automatically lose image support when macOS chooses to no longer support them, too.
One example is Sony's SRF camera raw format.
Programs like Photoshop and Affinity have to bring their own decoders where previously none were required.
dspillett · 22m ago
And having to bring in support for formats that are deprecated by the OS, if they decide to keep supporting that format as there is sufficient demand from their users, is worse than having to bring in support for all formats rather than getting support from the OS?
Having asked that in a slightly confrontational way: one of the reasons I started using VLC all those years ago, and still use it to this day, was having trouble with other media players that relied on OS support failing to work well (or at all) with some codecs, while VLC brought support for them, and their dog, built in and reliable. Dragging your own format support libraries with you can be beneficial.
echelon · 11m ago
You can't use webp on Reddit, Instagram, and hundreds of other websites. Which is ironic because some of them serve images as webp.
AlienRobot · 1h ago
You can't even upload webp to instagram.
bastawhiz · 1h ago
Which makes sense for an app made for photos: why would you capture a photograph to disk in a format made for distributing on the web?
jdiff · 1h ago
Indeed, why might one upload a photo to the web in a format made for distributing images on the web?
Sharlin · 2m ago
Instagram hasn't even been primarily or even secondarily about photos for a long time. Indeed trying to "just" upload a photo is made super inconvenient these days.
hsbauauvhabzb · 2h ago
My file manager can’t handle them but my browser can.
Edit: and good luck uploading the format to the majority of webforms that aren’t faang.
debugnik · 56m ago
Not even Google supports webp uploads in many of their web apps, and it's their format.
chillingeffect · 12m ago
Could it be a lack of resources? Or some missing expertise? Maybe they could find some interns who are familiar with it? Maybe the entire world is so obsessed w AI, we don't even care about image formats anymore.
upcoming-sesame · 29m ago
If you are using an image optimization service like Imgix / Cloudflare Image Resizing then it doesn't really matter; the image can be uploaded in any supported format and will be sent to the end user according to their "Accept" header.
dotancohen · 2h ago
5% of people can't view them, yet 25% of top websites use them?
In what other industry would it be considered acceptable to exclude 5% of visitors/users/clients?
pchangr · 1h ago
I can tell you, I have personally worked with a global corporation, and we estimated that for one of their websites, supporting the 3% that we exclude by using “modern standards” would be more costly than the revenue they generate. So in that case, it was a rational decision. And up to a 10% cut, management just didn’t want to make the extra investment. So if something falls below that 10% threshold, they just don’t care to get it fixed.
pasc1878 · 47m ago
Any industry.
e.g. cars - not everyone is physically able to drive
books - blind people can't read
music - deaf people can't hear
It is a form of the 80/20 or 90/10 rule: the last small percentage costs as much as the majority.
mlok · 2h ago
Maybe they offer alternatives to webp for those 5%? See CSS image-set: https://developer.mozilla.org/en-US/docs/Web/CSS/image/image...
> 5% of people can't view them, yet 25% of top websites use them?
That's not how it works.
The server declares what versions of media it has, and the client requests a supported media format. The same trick has been used for audio and video for ages too.
This problem was solved by HTTP since forever. Client sends `Accept` header with supported formats and server selects the necessary content with corresponding `Content-Type` header. You don't need any HTML tags for it.
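A minimal sketch of that negotiation (the file names and variant list are hypothetical, not any real site's code):

    def pick_variant(accept_header: str) -> str:
        # Hypothetical pre-generated variants, preferred format first.
        variants = [("image/webp", "photo.webp"), ("image/png", "photo.png")]
        for mime, filename in variants:
            if mime in accept_header:
                return filename
        return "photo.png"  # fallback every client can decode

    assert pick_variant("image/avif,image/webp,*/*") == "photo.webp"
    assert pick_variant("image/png") == "photo.png"

A real implementation would also parse q= quality values properly and send `Vary: Accept` so caches keep the variants apart.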
allendoerfer · 1h ago
What about file extensions?
georgyo · 58m ago
File extensions are just a hint about what the file might be and have nothing to do with what the file actually is. If the server sets the MIME type, the browser will use that as the hint.
But even beyond that, most file formats have a bit of a header at the start of the file that declares the actual format of the file. Browsers already understand that and can use the correct renderer for a file without an extension.
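A sketch of that kind of sniffing using the well-known signatures (illustrative, not any particular browser's code):

    def sniff(data: bytes) -> str:
        # Each format announces itself in its first few bytes.
        if data.startswith(b"\x89PNG\r\n\x1a\n"):
            return "image/png"
        if data.startswith(b"\xff\xd8\xff"):
            return "image/jpeg"
        if data.startswith((b"GIF87a", b"GIF89a")):
            return "image/gif"
        if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
            return "image/webp"
        return "application/octet-stream"  # unknown; fall back to the MIME type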
jdiff · 1h ago
Sometimes respected, largely ignored. URLs very often don't map directly to files served.
sjsdaiuasgdia · 2h ago
Not all businesses are attempting to reach a market of "every internet user globally".
bawolff · 2h ago
Can the 5% view images at all? The number of web crawlers has exploded recently.
jdiff · 51m ago
Yes, but it's 2% that are still using browsers without full support for WebP according to caniuse, which takes its numbers from StatCounter: https://caniuse.com/webp
Note that I'm looking at "all tracked," which excludes 2% "other" browsers in the data whose featureset is not known.
Hendrikto · 1h ago
> Momentum built, and additional parties became interested. […] we had representation from […] Adobe, Apple, BBC, Comcast / NBCUniversal, Google, MovieLabs, and […] W3C
> Many […] programs […] already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
> Plus, you saw some broadcast companies in that list above. Behind the scenes, hardware and tooling are being updated to support the new PNG spec.
127 · 24m ago
There's a big issue in that all the old popular image formats are 8-bit. 10 bits or even 12 bits would help a lot with storing more information and maintaining editability.
londons_explore · 17m ago
If adding more bits to an image format, please make it 'n-bit'. I.e. the file could be 8-bit, it could be 10, it could be 12, it could be 60-bit!
Whilst we're at it, please get rid of RGB and make it N channels too.
Libraries can choose to render that into a 3 channel, 8 bit buffer for legacy applications - but the data will be there for CMYK or HDR, or depth maps, or transparency, or focus stacking, or any other future feature!
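As a purely hypothetical sketch of what such a header could carry (this is not part of PNG or any real spec):

    from dataclasses import dataclass

    @dataclass
    class FlexImageHeader:
        width: int
        height: int
        bit_depth: int     # 8, 10, 12, ... anything up to 60
        channels: int      # not fixed to 3
        channel_names: tuple  # e.g. ("C", "M", "Y", "K", "alpha", "depth")

A decoder could then downconvert to an 8-bit RGB(A) buffer for legacy callers while keeping the full data for everyone else.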
Etheryte · 2h ago
I don't really think this is the case here. All major browsers already support the new spec for example. This isn't a case of oh we'll have support for it eventually, it's already there.
qwertfisch · 22m ago
Seems a bit too late?
And also, JPEG XL supports all the features and uses already advanced compression (finite-state entropy, like ZStandard). It offers lossy and lossless compression, animated pictures, HDR, EXIF etc.
There is just no need for a PNG update, just adopt JPEG XL.
369548684892826 · 39m ago
A fun fact about PNG: the correct pronunciation is defined in the specification.
> PNG is pronounced “ping”
See the end of Section 1 [0]
Because the creator of gifs telling the world how he pronounced it made such a huge difference :)
Not sure I'll bother to reprogram myself from “png”, “pung”, or “pee-enn-gee”.
qwertox · 7h ago
> Officially supports Exif data
Probably the best news here. While you already can write custom data into a header, having Exif is good.
BTW: Does Exif have a magnetometer (rotation) and acceleration (gravity) field? I often wonder about why Google isn't saving this information in the images which the camera app saves. It could help so much with post-processing, like with leveling the horizon or creating panoramas.
Findecanor · 18m ago
Does the meta-data have support for opting in/out of "AI training"?
And is being able to read an image without an opt-in tag something that has to be explicitly enabled in the reference implementation's API?
Aardwolf · 7h ago
Exif can also cause confusion for how to render the image: should its rotation be applied or not?
Old decoders and new decoders now could render an image with exif rotation differently since it's an optional chunk that can be ignored, and even for new decoders, the spec lists no decoder recommendations for how to use the exif rotation
It does say "It is recommended that unless a decoder has independent knowledge of the validity of the Exif data, the data should be considered to be of historical value only.", so hopefully the rotation will not be used by renderers, but it's only a vague recommendation, there's no strict "don't rotate the image" which would be the only backwards compatible way
With jpeg's exif, there have also been bugs with the rotation being applied twice, e.g. desktop environment and underlying library both doing it independently
DidYaWipe · 5h ago
The stupid thing is that any device with an orientation sensor is still writing images the wrong way and then setting a flag, expecting every viewing application to rotate the image.
The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.
klabb3 · 47m ago
TIL, and hard agree (on face value). I’ve been struck by this with arbitrary rotation of images depending on application, very annoying.
What are the arguments for this? It would seem easier for everyone to rotate and then store exif for the original rotation if necessary.
ralferoo · 2h ago
One interesting thing about JPEG is that you can rotate an image with no quality loss. You don't need to convert each 8x8 square to pixels, rotate and convert back, instead you can transform them in the encoded form. So, rotating each 8x8 square is easy, and then rotating the image is just re-ordering the rotated squares.
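A sketch of the coefficient trick for a single 8x8 block (real tools like jpegtran additionally reorder the blocks across the image and handle chroma subsampling and edge blocks):

    import numpy as np

    def rotate_block_90cw(coeffs: np.ndarray) -> np.ndarray:
        # 90 degrees clockwise = transpose, then horizontal mirror. In the
        # DCT domain, mirroring just negates the odd horizontal frequencies,
        # so no pixels are ever decoded and re-encoded.
        out = coeffs.T.copy()
        out[:, 1::2] *= -1
        return out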
mavhc · 4h ago
Because your non-smartphone camera doesn't have enough ram/speed to do that I assume (when in burst mode)
If a smartphone camera is doing it, then bad camera app!
Aardwolf · 3h ago
Rotation for speed/efficiency/compression reasons (indeed, with PNG's horizontal line filters it can have a compression rationale too) should have been a flag within the compressed image data format, for use by the encoder/decoder only (with caveats for renderers that handle partial decoding... but the point is to have the behavior rigorously specified, encoded in the image format itself, and handled in exactly one known place, namely the decoder), not part of metadata.
It's basically a shame that the Exif metadata contains things that affect the rendering.
joking · 1h ago
The main reason is probably that the chip is already outputting the image in a lossy format, and if you reorder the pixels you must re-encode the image, which means degrading it, so it's much better to just change the Exif orientation.
Joel_Mckay · 4h ago
Most modern camera modules have built in hardware codecs like mjpeg, region of interest selection, and frame mirror/flip options.
This is particularly important on smartphones and battery operated devices. However, most smartphone devices simply save the photo the same way regardless of orientation, and simply add a display-rotated flag to the metadata.
It can be super annoying sometimes, as one can't really disable the feature on many devices. =3
andsoitis · 6h ago
There is no standard field to record readouts of a camera's accelerometers or inertial navigation system.
Yes, but websites frequently strip all or almost all Exif data from uploaded images because some fields are used by stalkers to track people down to their real address.
johnisgood · 5h ago
And I strip Exif data, too, intentionally, for similar reasons.
bspammer · 4h ago
That makes sense to me for any image you want to share publicly, but for private images having the location and capture time embedded in the image is incredibly useful.
johnisgood · 4h ago
If by private you mean "never shared", I agree.
bawolff · 2h ago
Personally I wish people just used XMP. Exif is such a bizarre format. It's essentially embedding a TIFF image inside a PNG.
albert_e · 7h ago
So animated GIFs can be replaced by animated PNGs with alpha blending, transparent backgrounds, and lossless compression! Some nostalgia from 2000s websites can be revived and relived :)
Curious if animated SVGs are also a thing. I remember seeing some JavaScript-based SVG animations (it was an animated chatbot avatar) - but not sure if there is any standard framework.
This could possibly be used to build full fledged games like pong and breakout :)
theqwxas · 1h ago
Some years ago I used the Lottie (Bodymovin?) library. It worked great and had a nice integration: you compose your animation in Adobe After Effects, export it to an SVG plus some JSON, and the Lottie JS script would handle the animation for you. Anything else with (vector, web) animations I've tried is missing the tools or the DX for me to adopt. Curious to hear if there are more things like this.
I'm not sure about the tools and DX around animated PNGs. Is that a thing?
riffraff · 7h ago
I was under the impression many gifs these days are actually served as soundless videos, as those basically compress better.
Can animated PNG beat av1 or whatever?
layer8 · 6h ago
APNG would be for lossless compression, and probably especially for animations without a constant frame rate. Similar to the original GIF format, with APNG you explicitly specify the duration of each individual frame, and you can also explicitly specify looping. This isn’t for video, it’s more for Flash-style animations, animated logos/icons [0], or UI screen recordings.
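For example, the per-frame timing lives in APNG's fcTL chunk as a fraction of a second, so a variable frame rate is native rather than hacked in. A tiny sketch:

    def frame_delay_seconds(delay_num: int, delay_den: int) -> float:
        # Per the APNG spec, a denominator of 0 means 1/100-second units.
        return delay_num / (delay_den or 100)

    frame_delay_seconds(1, 30)   # hold a frame for 1/30 s
    frame_delay_seconds(250, 0)  # hold a frame for 2.5 s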
All valid points, however AV1 also supports lossless compression and is almost certainly going to win the file size competition against APNG every time.
It's also because people like to "pause" animations, and that is not really an option with APNG & GIF.
josephg · 6h ago
I doubt it, given PNG is a lossless compression format. For video that's almost never what you want.
DidYaWipe · 5h ago
For animations with lots of regions of solid color it could do very well.
armada651 · 5h ago
> Can animated PNG beat av1 or whatever?
Animated PNGs can't beat GIF nevermind video compression algorithms.
jeroenhd · 4h ago
Once you add more than 256 different colours in total, GIF explodes in terms of file size. It's great for small, compact images with limited colour information, but it can't compete with APNG when the image becomes more detailed than what you'd find on Geocities.
Aissen · 4h ago
> Animated PNGs can't beat GIF nevermind video compression algorithms.
Not entirely true, it depends on what's being displayed, see a few simple tests specifically constructed to show how much better APNG can be vs GIF and {,lossy} webp:
http://littlesvr.ca/apng/gif_apng_webp.html
Of course I don't think it generalizes all that well…
bmacho · 2h ago
I tried these examples on ezgif, and indeed APNG manages to be smaller than WebP every single time. Weird, I was under the impression that WebP was almost always smaller? Is this because GIF images are already special, or because APNG uses better compression than PNG?
edit: using the same ezgif webp and apng on a H.264 source, apng is suddenly 10x the size than webp. It seems apng is only better if the source is gif
Aissen · 1h ago
I have no idea! I actually hoped someone would show a much more comprehensive and serious benchmark in response, but that has failed to materialize.
fc417fc802 · 4h ago
> many gifs these days are actually served as soundless videos
That's not really true. Some websites lie to you by putting .gif in the address bar but then serving a file of a different type. File extensions are merely a convention and an address isn't a file name to begin with so the browser doesn't care about this attempt at end user deception one way or the other.
faceplanted · 2h ago
You said "that's not really true" and then described exactly how it's true; what did you mean?
chithanh · 5h ago
When it comes to converting small video snippets to animated graphics, I think WebP was much better than APNG from the beginning. Only if you used GIF as an intermediate format was APNG competitive.
Nowadays, AVIF serves that purpose best I think.
bmacho · 5h ago
> Curious if Animated SVGs are also a thing.
SVG is just HTML5: it has full support for CSS, JavaScript with buttons, web workers, arbitrary fetch requests, and so on (obviously not supported by image viewers or allowed by browsers).
bawolff · 2h ago
Browsers support all that sort of thing, as long as you use an iframe. (Technically there are some subtle differences between that and HTML5, but you are right, it's mostly the same.)
If you use an <img> tag, svgs are loaded in "restricted" mode. This disables scripting and external resources. However animation via either SMIL or CSS is still supported.
It seems crazy to think about, but I interviewed with a power company in 2003 that was building a web app with animated SVGs.
jokoon · 4h ago
Both GIF and PNG use zipping for compressing data, so APNG is not much better than GIF.
Calzifer · 1h ago
(A)PNG supports semi-transparency. In GIF a pixel is either fully transparent or fully opaque.
Also, while true-color GIFs seem to be possible, they are usually limited to 256 colors per image.
For those reasons alone APNG is much better than GIF.
bawolff · 2h ago
PNG uses deflate (same as ZIP) but GIF uses LZW. These are different algorithms. You should expect different compression results, I would assume.
0points · 2h ago
Remember when we unwillingly trained the generative AIs of our time with an endless torrent of factoids?
ksec · 1h ago
It is just a spec for something widely implemented already.
Assuming next-gen PNG will still require a new decoder, they could just call it PNG2.
JPEG-XL already provides everything most people ask of a lossless codec. If it has any problem, it is its encoding and decoding speed and resource use.
I remember reading about this in a web forum mainly for dublin core fanatics. Metadata is fascinating.
Different software reacts in different ways to partial specifications of yyyy/mm/dd, such that you can try some of the cute tricks, but probably only one software package honours it.
And the majors ignore almost all fields other than a core set of one or two, disagree about their semantics, and also do weird stuff with file name and atime/mtime.
SchemaLoad · 6h ago
The issue that gets me is that Google Photos and Apple Photos will let you manually pick a date, but they won't actually set it in the photo EXIF, so when you move platforms, all of the images that came from scans or were sent without EXIF lose their dates.
ggm · 5h ago
It's in sidecar files. Takeout gets them, some tools read them.
LeoPanthera · 7h ago
> I know you all immediately wondered, better compression?. We're already working on that.
This worries me. Because presumably, changing the compression algorithm will break backwards compatibility, which means we'll start to see "png" files that aren't actually png files.
It'll be like USB-C but for images.
lifthrasiir · 7h ago
Better compression can also mean a new set of filter methods or a new interlacing algorithm. But yeah, any of them would cause an instant incompatibility. As noted in the relevant issue [1], we will need a new media type at the very least.
I am hopeful whatever better compression arrives doesn't end up multiplying memory requirements or increasing the burden on the CPU, especially on decompression.
Now, PNG datatype for AmigaOS will need upgrading.
Arnt · 3h ago
I don't see why? If your video output is plain old RGB (like the Amiga hardware), then an unmodified decoder will handle new files without a problem. You only need a new decoder if your video output can handle more vivid colours than RGB can express.
Findecanor · 14m ago
An image decoded in the wrong colour space for the output will look wrong. It is not using extra bits to express the increased dynamic range: the existing numeric range is stretched and warped.
jillesvangurp · 4h ago
Old PNGs will work just fine. And forward compatibility is much less important.
The main use case for PNG is web browsers and all of them seem to be on board. Using old web browsers is a bad idea. You do get these relics showing up using some old version of internet explorer. But some images not rendering is the least of their problems. The main challenge is actually going to be updating graphics tools to export the new files. And teaching people that sRGB maybe isn't good enough any more. That's going to be hard since most people have no clue about color spaces.
Anyway, that gives everybody plenty of time to upgrade. By the time this stuff is widely used, it will be widely supported. So, you kind of get forward compatibility that way. Your browser already supports the new format. Your image editor probably doesn't.
hnlmorg · 3h ago
Browsers aren't the only software that work with PNGs. Far from it in fact.
AlienRobot · 1h ago
>The main use case for PNG is web browsers
This is news to me. I'm pretty sure the main use case for PNG is lossless transparent graphics.
Lerc · 7h ago
It has fields to say what compression is used. Adding another compression form should be handled by existing software as recognizing it as a valid PNG that they can't decompress.
The PNG format is specifically designed to allow software to read the parts they can understand and to leave the parts they cannot. Having an extensible format and electing never to extend it seems pointless.
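Concretely, the compression method is a single byte in the IHDR chunk (only value 0, deflate, has been defined so far). A minimal sketch of reading it:

    import struct

    def read_ihdr(path: str) -> dict:
        with open(path, "rb") as f:
            f.read(8)  # skip the fixed 8-byte signature
            length, ctype = struct.unpack(">I4s", f.read(8))
            assert ctype == b"IHDR"  # IHDR must come first
            w, h, depth, color, comp, filt, interlace = struct.unpack(
                ">IIBBBBB", f.read(13))
            return {"width": w, "height": h, "bit_depth": depth,
                    "color_type": color, "compression": comp}

An old reader hitting an unknown compression value can at least fail with "valid PNG, unsupported compression" instead of "corrupt file".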
koito17 · 6h ago
> Having an extensible format and electing never to extend it seems pointless.
This proves OP's analogy regarding USB-C. Having PNG as some generic container for lossless bitmap compression means fragmentation in libraries, hardware support, etc. The reason being that if the container starts to support too many formats, implementations will start restricting themselves to only the subsets the implementers care about.
For instance, almost nobody fully implements MPEG-4 Part 3; the standard includes dozens of distinct codecs. Most software only targets a few profiles of AAC (specifically, the LC and HE profiles), and MPEG-1 Layer 3 audio. Next to no software bothers with e.g. ALS, TwinVQ, or anything else in the specification. Even libavcodec, if I recall correctly, does not implement encoders for MPEG-4 Part 3 formats like TwinVQ. GP's fear is exactly this -- that PNG ends up as a standard too large to fully implement and people have to manually check which subsets are implemented (or used at all).
cm2187 · 4h ago
But where the analogy with USB-C is very good is that just like USB-C, there is no way for a user to tell from the look of the port or the file extension what the capabilities are. Which even for a fairly tech savvy user like me is frustrating. I have a bunch of cables, some purchased years ago, how do I know what is fit for what?
And now think of the younger generation that has grown up with smartphones and have been trained to not even know what a file is. I remember this story about senior high school students failing their school tests during covid because the school software didn't support heif files and they were changing the file extension to jpg to attempt to convert them.
I have no trust that the software ecosystem will adapt. For instance, the standard libraries of the .NET Framework are fossilised in the world of multimedia as of 2008-ish. I don't believe HEIF is even supported to this day. So that's a whole bunch of code which, unless the developers create workarounds, will never support a newer PNG format.
skissane · 1h ago
> there is no way for a user to tell from the look of the port or the file extension what the capabilities are
But that's typical for file extensions. Consider EXE – it is probably an executable, but an executable for what? Most commonly Windows – but which Windows version will this EXE run on? Maybe this EXE only works on Windows 11, and you are still running Windows 10. Or maybe you are running x86-64 Windows, but this EXE is actually for ARM or MIPS or Alpha. Or maybe it is for some other platform which uses that extension for executable files – such as DOS, OS/2, 16-bit Windows, Windows CE, OpenVMS, TOPS-10, TOPS-20, RSX-11...
.html, .js, .css – suggest to use a web browser, but don't tell you whether they'll work with any particular one. Maybe they use the latest features but you use an old web browser which doesn't support them. Maybe they require deprecated proprietary extensions and so only work on some really old browser. Maybe this HTML page only works on Internet Explorer. Maybe instead of UTF-8 it is in some obscure legacy character set which your browser doesn't support.
.zip – supports extensible compression and encryption methods, your unzip utility might not support the methods used to compress/encrypt this particular zip file. This is actually normal for very old ZIP files (from the 1980s) – early versions of PKZIP used various deprecated compression mechanisms, which few contemporary unzip utilities support. The format was extended to 64-bit without changing the extension, there's still a lot of 32-bit only implementations out there. ZIP also supports platform-specific file attributes–e.g. PKZIP for z/OS creates ZIP files which contain metadata about mainframe data storage formats, unzip on another platform is going to have no idea what it means, but the metadata is actually essential to interpreting the data correctly (e.g. if RECFM=V you need to parse the RDWs, if RECFM=F there won't be any)
.xml - okay, it is XML – but that tells you nothing about the actual schema. Maybe you were expecting this xml file to contain historical stock prices, but instead it is DocBook XML containing product documentation, and your market data viewer app chokes on it. Or maybe it really is historical stock prices, but you are using an old version of the app which doesn't support the new schema, so you can't view it. Or maybe someone generated it on a mainframe, but due to a misconfiguration the file came out in EBCDIC instead of ASCII, and your app doesn't know how to read EBCDIC, yet the mainframe version of the same app reads it fine...
.doc - people assume it is legacy (pre-XML) Microsoft Word: every version of which changed the file format, old versions can't read files created with newer versions correctly or at all, conversely recent versions have dropped support for files created in older versions, e.g. current Office versions can't read DOC files created with Word for DOS any more... but back in the 1980s a lot of people used that extension for plain text files which contained documentation. And it was also used by incompatible proprietary word processors (e.g. IBM DisplayWrite) and also desktop publishing packages (e.g. FrameMaker, Interleaf)
.xmi – I've seen this extension used for both XML Model Interchange (XML-based standard for exchanging UML diagrams) and XMIT (IBM mainframe file archive format). Because extensions aren't guaranteed to be unique, many incompatible file formats share the same extension
.com - is it an MS-DOS program, or is it DCL (Digital Command Language)?
.pic - probably some obscure image format, but there are dozens of possibilities
.img – could be either a disk image or a visual image, either way dozens of incompatible formats which use that extension
.db – nowadays most likely SQLite, but a number of completely incompatible database engines have also used this extension. And even if it is SQLite, maybe your version of SQLite is too old to read this file because it uses some features only found in newer versions. And even if SQLite can read it, maybe it has the wrong schema for your app, or maybe a newer version of the same schema which your old version that app doesn't support, or an old version of the schema which the current version of the app has dropped support for...
Calzifer · 50m ago
Just last week I again had some PDFs that Okular could not open because of some less common form features.
bayindirh · 5h ago
JPEG is no different. Only the decoder is specified. As long as what you give the decoder decodes to the image you wanted to see, you can implement anything. This is how imgoptim/squash/aerate/dietJPG work: by (ab)using this flexibility.
Same is also true for the most advanced codecs. MPEG-* family and MP3 comes to my mind.
Nothing stops PNG from defining a "set of decoders", and let implementers loose on that spec to develop encoders which generate valid files. Then developers can go to town with their creativity.
cm2187 · 4h ago
Video files aren't a good analogy. Before God placed VLC and ffmpeg on earth, you had to install a galaxy of codecs on your computer to get a chance to read a video file and you could never tell exactly what codec was stored in a container, nor if you had the right codec version. Unfortunately there is no vlc and ffmpeg for images (I mean there is, the likes of imagemagick, but the vast majority of software doesn't use them).
fc417fc802 · 5h ago
I honestly don't see an issue with the mpeg-4 example.
Regarding the potential for fragmentation of the PNG ecosystem, the alternative is a new file format which has all the same support issues. Every time you author something you make a choice between legacy support and using new features.
From a developer perspective, adding support for a new compression type is likely to be much easier than implementing logic for an entirely new format. It's also less surface area for bugs. In terms of libraries, support added to a dependency propagates to all consumers with zero additional effort. Meanwhile adding a new library for a new format is linear effort with respect to the number of programs.
7bit · 4h ago
I never once in 25 years encountered an issue with an MP4 container that could not be solved by installing either the DivX or Xvid codec. And I extensively used MP4's metadata for music, even with esoteric tags.
Not sure what you're talking about.
Arnt · 4h ago
He's saying that in 25 years, you used only the LC and HE profiles, and didn't encounter TwinVQ even once. I looked at my thousand-odd MPEG-4 files. They're overwhelmingly AAC LC, a little bit of AAC LC SBR, no TwinVQ at all.
If you want to check yours: mediainfo **/*.mp4 | grep -A 2 '^Audio' | grep Format | sort | uniq -c
The difference between valid PNG you can't decompress and invalid PNG is fairly irrelevant when your aim is to get an image onto the screen.
And considering we already have plenty of more advanced competing lossless formats, I really don't see why "feed a BMP to deflate" needs a new, incompatible spin in 2025.
More generally, PNG has a simple feature to specify what's needed. A file consists of a number of chunks, and one bit in the chunk specifies whether that chunk is required for display. All of the extensions I've seen in the past decades set that bit to "optional".
For example, this update includes a chunk containing EXIF data. As you'd expect, the exif chunk sets that bit to "optional".
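That bit is bit 5 of the first byte of the chunk type, i.e. whether the first letter is lowercase. A one-line sketch of the check:

    def is_ancillary(chunk_type: bytes) -> bool:
        # Lowercase first letter = ancillary (safe for decoders to skip).
        return bool(chunk_type[0] & 0x20)

    is_ancillary(b"IHDR")  # False: critical, must be understood
    is_ancillary(b"eXIf")  # True: skippable, like the new Exif chunk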
fc417fc802 · 5h ago
> plenty of more advanced competing lossless formats
Other than JXL which still has somewhat spotty support in older software? TIFF comes to mind but AFAIK its size tends to be worse than PNG. Edit: Oh right OpenEXR as well. How widespread is support for that in common end user image viewer software though?
mort96 · 6h ago
> Adding another compression form should be handled by existing software as recognizing it as a valid PNG that they can't decompress.
Yeah, we know. That's terrible.
pvorb · 5h ago
Extending the format just because you can – and breaking backwards compatibility along the way – is even more pointless.
If you've created an extensible file format, but you never need to extend it, you've done everything right, I'd say.
jajko · 5h ago
What about an extensible format that would have, as part of the header, an algorithm (in some recognized DSL) for how to decompress it (or any other step required for image manipulation)? I know it's not so much about PNG as some future format.
That's what I would call really extensible, but then there may be no limits, and hacking/viruses could easily have a field day.
lelanthran · 5h ago
> What about an extensible format that would have as part of header an algorithm (in some recognized DSL) of how to decompress it (or any other step required for image manipulation)?
Will sooner or later be used to implement RCEs. Even if you could do a restriction as is done for eBPF, that code still has to execute.
Best would be not to extend it.
HelloNurse · 5h ago
Extensibility of PNG has been amply used, as intended, for proprietary chunks that hold application specific data (e.g. PICO-8 games) without bothering other software.
chithanh · 5h ago
> Adding another compression form should be handled by existing software
In an ideal world, yes. In practice however, if some field doesn't change often, then software will start to assume that it never changes, and break when it does.
TLS learned this the hard way when it was discovered that huge numbers of existing web servers have TLS version intolerance. So now TLS 1.2 is forever enshrined in the ClientHello.
dooglius · 5h ago
> Having an extensible format and electing never to extend it seems pointless.
So then it was pointless for PNG to be extensible? Not sure what your argument is.
skywal_l · 7h ago
Can't you improve a compression algorithm and still produce valid input for existing decompressors? PNG is based on zip; there are certainly ways to improve zip without breaking backwards compatibility.
That being said, they also can do dumb things however, right at the end of the sentence you quote they say:
> we want to make sure we do it right.
So there's hope.
masklinn · 7h ago
> Can't you improve a compression algorithm and still produce a still valid decompression input? PNG is based on zip, there's certainly ways to improve zip without breaking backwards compatibility.
That's just changing an implementation detail of the encoder, and you don't need spec changes for that e.g. there are PNG compressors which support zopfli for extra gains on the DEFLATE (at a non-insignificant cost). This is transparent to the client as the output is still just a DEFLATE stream.
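A small illustration of the principle with plain zlib (zopfli plays the same game, just with far more search effort):

    import zlib

    data = b"the quick brown fox " * 500
    fast = zlib.compress(data, level=1)
    best = zlib.compress(data, level=9)

    # Different sizes, but both are valid DEFLATE streams that any
    # existing decoder accepts unchanged.
    assert zlib.decompress(fast) == zlib.decompress(best) == data
    print(len(fast), len(best))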
vhcr · 7h ago
That's what OptiPNG already does.
josefx · 2h ago
Doesn't OptiPNG just brute force various settings and pick the best result?
altairprime · 3h ago
They could, for example, use lossy compression for the compatibility layer and then fill it in the rest of the way to lossless using incompatible new compression objects. Legacy uses will see some fidelity degradation, but they are already being stuck with sRGB downmixes, so that’s fine — and those who are bothered by it can just emit a lossless-pixels (but lossy-color and lossy-range) compatibility layer and reserve the compression benefits for the color and dynamic range.
I’m not saying this is what will happen — but if I was able to construct a plausible approach to compression in ten minutes, then perhaps it’s a bit early to predict the doom of compatibility.
mrheosuper · 6h ago
Does the USB-C spec break backwards compatibility? A 2018 MacBook works perfectly fine with a 2025 USB-C charger.
danielheath · 6h ago
Some things don't work unless you use the right kind of USB-C cable.
EG your GPU and monitor both have a USB-C port. Plug them together with the right USB cable and you'll get images displayed. Plug them together with the wrong USB cable and you won't.
USB 3 didn't have this issue - every cable worked with every port.
mrheosuper · 6h ago
That is not a backwards compatibility problem. If a cable did 100 W charging when using PD 2.0, but only 60 W when used with a PD 3.1 device, then I would agree with you.
yoz-y · 5h ago
The problem is not backward compatibility but labeling. A USB-C cable looks universal but isn’t. Some of them just charge, some do data, some do PD, some give you access to high speed. But there is no way to know.
I believe the problem here is that you will have PNG images that “look” like you can open them but can’t.
voidUpdate · 5h ago
That's not just an issue with USB-C. Normal USB A and B cables can have data or no data depending on how stingy the company wants to be, and you can't know until you test it.
Xss3 · 4h ago
You can get pretty good guesses just by feel and length. Tiny with a super thin cable? Probably charge only.
mystifyingpoi · 5h ago
Cable labeling could fix 99% of the issues with USB-C compat. The solution should never be blaming consumer for buying the wrong cable. Crappy two-wire charge-only cables are perfectly fine for something like a night desk lamp. Keep the poor cables, they are okay, just tell me if that's the case.
ay · 5h ago
Same thing with PNG. Just call the format with the new additions PNGX, so the user can clearly see that the reason their software can’t display the image is not file corruption.
This is just pretending that if you have a cat and a dog in two bags and you call it “a bag”, it’s one and the same thing…
lelanthran · 4h ago
> Cable labeling could fix 99% of the issues with USB-C compat.
Labelling is a poor band-aid on the root problem - consumer cables which look identical and fit identically should work wherever they fit.
There should never have been a power-only spec for USB-C socket dimensions.
If a cable supports both power and data, it must fit in all sockets. If a cable supports only power it must not fit into a power and data socket. If a cable supports only data, it should not fit into a power and data socket.
It is possible to have designed the sockets under these constraints, with the caveat that they only go in one way. I feel that that would have been a better trade-off. Making them reversible means that you cannot have a design which enforces cable type.
mystifyingpoi · 4h ago
> If a cable supports only power it must not fit into a power and data socket
That's even more confusing than the current state of affairs. If my phone has a power and data socket, then I cannot use a power-only cable to just charge it? Presumably with a charger that has a power-only socket. So I need a cable with two different ends anyway. Just go micro-USB at this point :)
Funnily enough, there is a 100% overkill way to solve such issues. Just use super expensive certified TB cables. Well... plus a A-to-C adapter for noncompliant devices, I guess.
Xss3 · 4h ago
So since my vape (example, i dont vape) has a power and data slot for charging and firmware updates, i should be limited to only using dual purpose cables day to day rather than a power only cable?
lelanthran · 3h ago
> So since my vape (example, i dont vape) has a power and data slot for charging and firmware updates, i should be limited to only using dual purpose cables day to day rather than a power only cable?
Well, yes.
Why can't you use a power+data cable for the vape (or whichever appliance takes both)? What's the deal-breaker here?
The alternative is labeling, or plugging cables in to see if they do what you want them to do.
Both are a poor user interface.
mrheosuper · 5h ago
The parent said "changing the compression algorithm will break backwards compatibility", which I assume means something that works now won't work in the future. The USB-C spec intentionally tries to avoid that.
danielheath · 5h ago
Today, I can save a PNG file off a random website and then open it.
If PNG gets extended, it's entirely plausible that someone will view a PNG in their browser, save it, and then not be able to open the file they just saved.
There are those who claim "backwards compatibility" doesn't cover "how you use it" - but roughly none of the people who now have to deal with broken software care about such semantic arguments. It used to work, and now it doesn't.
fc417fc802 · 4h ago
The alternative is the website operator who wants to save on bandwidth instead adopts JXL or WEBP or what have you and ... the end user with old software still can't open it.
It's a dichotomy. Either the provider accommodates users with older software or not. The file extension or internal headers don't change that reality.
Another example, new versions of PDF can adopt all the bells and whistles in the world but I will still be saving anything intended to be long lived as 1/a which means I don't get to use any of those features.
mrheosuper · 5h ago
Which is what the USB-C spec has been avoiding so far. Even in the USB4 spec, there is a lot of mention that the new spec should be compatible with TB3 devices.
The USB-C spec is anything but backward incompatible.
johnisgood · 5h ago
This is what I fear, too.
Do they mention which C libraries use this spec?
globular-toast · 5h ago
Some aren't even USB. Thunderbolt and DisplayPort both use USB-C too.
Xss3 · 4h ago
Thunderbolt meets usbc specs (and exceeds them afaik), so it is still usb...
mystifyingpoi · 5h ago
Yeah, I also don't think they've broken backwards compat ever. Super high end charger from 2024 can charge old equipment from 2014 just fine with regular 5V.
What was broken was the promise of a "single cable to rule them all", partly due to manufacturers ignoring the requirements of USB-C (missing resistors or PD chips to negotiate voltages, requiring workarounds with A-to-C adapters), and a myriad of optional stuff, that might be supported or not, without a clear way to indicate it.
techpression · 6h ago
I don’t know if it’s the spec or just a plethora of vendors that ignore it, but I have many things with a USB-C port that require USB-A as the source. USB-C to A to C works, yay dongles, but not just C to C.
So maybe it’s not really breaking backwards compatibility, just a weird mix of a port and the communication being separate standards.
mrheosuper · 6h ago
Because those USB-C ports do not follow the spec. If they had followed the spec from day one there would be no problem even now.
fragmede · 5h ago
It's vendors just changing the physical port but not updating the electronics. Specifically, 5.1 kΩ pull-down resistors on the CC1 and CC2 pins are needed on the device side (the side that used to charge from USB-A) for a C-to-C cable to work.
zirgs · 4h ago
Yeah - it's a mess. Some devices only charge with a charger that supports PD. Some other devices need a charger WITHOUT PD support.
ajnin · 3h ago
What backward compatibility are we talking about here? Backwards compatibility of images will be fine, backwards compatibility of decoders might be impacted, but the article says the major image viewers (browsers) and image editors already support the 3rd version. Better compression is only planned for the 5th version of the spec.
Also, if you forbid evolving existing formats, the only way to improve is to introduce a new format, and I argue that would cause even more fragmentation and be more difficult to adopt. Look at all the drama surrounding JPEG XL.
colanderman · 7h ago
One could imagine a PNG file which contains a low-resolution version of the image with a traditional compression algorithm, and encodes additional higher-resolution detail using a new compression algorithm.
bawolff · 2h ago
I don't think that will really be an issue. How often has "progressive JPEG" ever caused problems? That's the same thing.
bmacho · 6h ago
+1, why not name it png4 or something? It's better if compatibility is obvious upfront.
josephg · 6h ago
I think if they did that, nobody would use it. And anyway, from the article:
> Many of the programs you use already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
It might be too late to rename png to .png4 or something. It sounds like we're using the new png standard already in a lot of our software.
LegionMammal978 · 7h ago
Reading the linked blog post on the new cICP chunk type [0], it looks like the "proper HDR support" isn't something that you couldn't already do with an embedded ICC profile, but instead a much-abbreviated form of the colorspace information suitable for small image files.
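If I'm reading the post right, the chunk really is tiny: four one-byte code points borrowed from ITU-T H.273, versus a multi-kilobyte ICC blob. A sketch of parsing it:

    def parse_cicp(data: bytes) -> dict:
        assert len(data) == 4
        primaries, transfer, matrix, full_range = data
        # e.g. (9, 16, 0, 1) = BT.2020 primaries, PQ transfer,
        # RGB (no matrix), full range -- a typical HDR combination
        return {"primaries": primaries, "transfer": transfer,
                "matrix": matrix, "full_range": full_range}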
I'm very curious to see how this will end up stacking up vs lossless jpegxl
Simran-B · 6h ago
I doubt it can get anywhere near. What is even the point of a new PNG version if there's something as advanced as JXL that is also royalty-free?
layer8 · 5h ago
Browser support for JPEG XL is poor (basically only Safari I think), while the new PNG spec is already supported by all mainstream browsers.
encom · 5h ago
It's poor only because Google is using their stranglehold on browsers to push their own WebP trash. That company can't get broken up soon enough.
layer8 · 4h ago
Firefox also doesn’t support JPEG XL out of the box, and Chrome does support the new PNG, so ¯\_(ツ)_/¯.
LoganDark · 5h ago
For starters, you're actually able to use PNG.
hrydgard · 5h ago
What about implementations? libpng seems pretty dead, 1.7 has been in development forever but 1.6 is still considered the stable version. Is there a current "canonical" png C/C++ library?
vanderZwan · 5h ago
I mean, if the spec has been stable for two decades then maybe there just hasn't been much to fix? Especially since PNG is a relatively simple image format.
iliketrains · 5h ago
Official support for animations, yes! This feels so nostalgic to me; I wrote an L-system generator with support for exporting animated PNGs 11 years ago! They worked only in Firefox, and Chrome used to have an extension for them. Too bad I had to take the website down.
Back then, there were no libraries in C# for it, but it's actually quite easy to make APNG from PNGs directly by writing chunks with correct headers, no encoders needed (assuming PNGs are already encoded as input).
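The chunk layout that makes this easy is just length + type + data + CRC-32 over type and data. A minimal sketch:

    import struct, zlib

    def write_chunk(out, ctype: bytes, data: bytes) -> None:
        # Every PNG/APNG chunk: 4-byte big-endian length, 4-byte type,
        # payload, then CRC-32 computed over the type and payload.
        out.write(struct.pack(">I", len(data)))
        out.write(ctype)
        out.write(data)
        out.write(struct.pack(">I", zlib.crc32(ctype + data)))

Assembling an APNG is then mostly bookkeeping: an acTL chunk up front, and fcTL/fdAT chunks wrapping the already-encoded frame data.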
While I welcome that there is now PNG with animations, I am less impressed by how Mozilla chose to push for it.
Using PNG's magic numbers and pretending to existing software that it is just a normal PNG? That is the same mindset that led to HTML becoming tag soup. After all, HTML with a <blink> tag is still HTML, no?
I think they could have achieved animated PNG standardization much faster with a more humble and careful approach.
snickerbockers · 4h ago
It was gone??? Was I the only one using it this entire time?
Dwedit · 5h ago
If you wanted better compression, it's called Lossless WEBP. Lossless WEBP is such a nice codec. Compared with Lossless JXL, it decompresses many times more quickly, and while JXL usually produces a smaller file, it doesn't always.
Lossless AVIF is not competitive.
However, lossless WEBP does not support indexed color images. If you need palettes, you're stuck with PNG for now.
ansgri · 3h ago
How's HDR and high bit depth support? One of the things I liked about JXL is wide range of bit depths and arbitrary number of channels.
rurban · 1h ago
And the JXL api is a nightmare, compared to WEBP.
altairprime · 3h ago
I look forward to seeing what PNG v5 does in the future with compression, especially relative to existing formats.
tonyedgecombe · 6h ago
>After 20 years of stagnation, PNG is back with renewed vigor!
After 20 years of success, we can't resist the temptation to mess with what works.
eviks · 4h ago
> [not] Officially supports Exif data
How can you call this basic fail a success?
tonyedgecombe · 1h ago
Exif data might be important to you but it clearly hasn't stopped the adoption of png.
encom · 5h ago
Yea I'm mildly concerned about this as well. PNG's age is a feature, in a time where software development has gone to hell.
HelloNurse · 5h ago
Without the new HDR and color profile handling, PNG was still useful but significantly obsolete. Display hardware has progressed over a few decades, raising the bar for image files.
virtualritz · 4h ago
There is nothing in display hardware today that TIFF couldn't handle already.
For example, 16-bit (integer) TIFF files 'with headroom', i.e. where some bits were used to represent data over 1.0 (HDR), were a common approach for VFX work in the 90's.
16-bit float TIFF has also been a thing for 33 years. Adobe DNG is modeled after TIFF. High-end offline renderers have traditionally used TIFF (with mip-maps) to store textures.
TIFF supports tags so primaries and white point or a known color space name can be stored in the file.
The format is so versatile, it is used everywhere.
And of course it also supports indexed color, i.e. a non-negotiable feature at the time PNG was introduced.
PNG was meant to replace GIF. Instead of looking at what was already there, some group of "experts" and "enthusiasts" (quote Wikipedia) succumbed to their NIH complexes.
If licensing/patent woes over compression algorithms had been a motivator, why not just add a new one to TIFF?
The fact that PNG stores straight/unpremultiplied alpha says everything if you know anything about imaging in computer graphics.
And the fact that the updated format spec just released didn't address this tells you everything you need to know about the group in charge of that, today.
PNG is the VHS of image formats. It should never have seen the light of day in the first place, nor gotten the adoption it did.
tonyedgecombe · 3h ago
>The format is so versatile, it is used everywhere.
Yeah, I love the fact that you can embed a PDF file inside a TIFF.
jeroenhd · 4h ago
> Display hardware has progressed
The continued popularity of non-HDR 1080p screens on laptops is a bleak reminder that most people would rather save a couple hundred bucks than buy HDR capable hardware.
HDR is great for TVs and a nice-to-have on phones (who mostly get it for free because OLEDs are the norm these days), but display technology only advances as much as its availability in low-cost devices.
leni536 · 4h ago
PNG already supports color profiles, but probably not HDR. I would say that the gamut argument in the article is misleading, you can already encode a wider gamut.
Not sure how HDR encoding works, but my impression is that you can set a nominal white point other than (1, 1, 1) in your specified colorspace. This is an extension, but orthogonal to specifying the colorspace itself and the gamut.
encom · 5h ago
>Display hardware has progressed
It has, but the WWW is still de facto sRGB, and will be for a long time still. But again, I'm not strictly opposed to evolving PNG, I just hope they don't ruin it in the process, because that's usually what happens when something gets updated for a modern audience. I'll be watching with mixed optimism and concern.
jeroenhd · 4h ago
Plenty of JPGs on the web are already in HDR and you wouldn't notice it if you don't have a HDR capable display. The same is true for PNGs.
b0a04gl · 5h ago
It's more to do with the obvious economic layer underneath. You give a format new life only if there's tooling and distribution muscle behind it. Adobe, Apple, Chrome, ffmpeg, etc. may not get aligned at the same time. Someone somewhere wants APNG/HDR/PNG to be a standard pipe again for creative chains; maybe because video formats are too bulky for microinteractions, or maybe because SVG is too unsafe in sandboxed renderers. And think of onboarding animations, embedded previews, rich avatars, system-wide thumbnails; all without shipping a separate codec or runtime. Every time a 'dead' format comes back, it's usually because someone needed a way around a gate.
jbverschoor · 5h ago
What if we kind of fit JXL in PNG? That way it's more likely to be supported
nektro · 6h ago
cautiously optimistic. the thing that makes png so sought after is its status as frozen
meindnoch · 4h ago
Parallel compression/decompression is already possible via Z_SYNC_FLUSH.
guilbep · 5h ago
Let's call it PPNG: Pas Portable NetWork Graphic
leviathan1 · 1h ago
Not backwards compatible I think
kumarvvr · 6h ago
Never heard about Animated PNGs, and I am a nerd to the core.
Pleasantly surprised.
defraudbah · 5h ago
This is good news. Any packages that support the new PNG standard, or plan to?
Rust/Go/Python/JS?
neepi · 5h ago
Oh no another HEIC!
sylware · 3h ago
Until everything new is "optional". Hopefully PNG won't be the target of "enshittification". We all know that for file formats there is very strong pressure from developers and vendors for that to happen, since it favors, hard, vendor and developer lock-in. If we're not careful, even a team of PhD devs won't be able to write alternative encoders/decoders that work "reasonably", and the world will end up with very few alternative implementations, if not only one.
I did skim through the specs, it seems most of it is related to cleanup and optional blocks, so it seems PNG is still safe, am I wrong? (asking those who did dive into the new specs deeply).
Joel_Mckay · 6h ago
DaVinci Resolve also supports OpenEXR format with the added magic of LUT.
PNG is popular with some Commercial Application developers, but the exposure and color problems still look 1980's awful in some use-cases.
Even after spending a few grand on seats for a project, one still gets arrogant 3D clown-ware vendors telling people how they should run their pipeline with PNG hot garbage as input.
People should choose EXR more often, and pick a consistent color standard. PNG does not need yet another awful encoding option. =3
morjom · 5h ago
What are some "consistent color standards" you'd recommend? Honest question.
Joel_Mckay · 5h ago
Like all complex questions, the answer is it depends on the target project and or Display.
The calibration workflows also depend heavily on what is being rendered, source application(s), and the desired content look. There were some common free packs on github for popular programs at one time. Should still be around someplace... good luck. =3
DidYaWipe · 5h ago
"PNG is popular with some Commercial Application developers, but the exposure and color problems still look 1980's awful in some use-cases."
What are you talking about? It's a bitmap. It has nothing to do with "exposure and color problems."
Joel_Mckay · 4h ago
In general, with some applications people hit the limits pretty quickly with PNG and JPG. In our use-case, the EXR format essentially meant a rendered part of the source image wouldn't be "overexposed" by the render pipeline, and layers could be later adjusted to better match in Resolve. Example: your scenes fireball simulation won't look like a fried egg photo from 1980 due to hitting 0xFF.
If you've never encountered the use-case, then don't worry about the aesthetics. Seriously, many vendors also just don't care... especially after they've already been paid. Best of luck =3
antirez · 3h ago
PNG: doing very little with as much complexity as possible.
LeoPanthera · 3h ago
You’re going to be shocked when you find out how webp works.
qwertfisch · 21m ago
Because that’s a video compression format, of which only a single intra frame is used.
Edit: and good luck uploading the format to the majority of web forms that aren't FAANG.
In what other industry would it be considered acceptable to exclude 5% of visitors/users/clients?
e.g.
• cars - not everyone is physically able to drive
• books - blind people can't read
• music - deaf people can't hear
It is a form of the 80/20 or 90/10 rule: the last small percentage costs as much as the majority.
See CSS image-set : https://developer.mozilla.org/en-US/docs/Web/CSS/image/image...
That's not how it works.
The server declares what versions of media it has, and the client requests a supported media format. The same trick has been used for audio and video for ages too.
Example (a typical <picture> fallback; file names are illustrative):
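    <picture>
      <!-- the browser downloads the first source it supports -->
      <source srcset="photo.avif" type="image/avif">
      <source srcset="photo.webp" type="image/webp">
      <!-- universally understood fallback -->
      <img src="photo.png" alt="a photo">
    </picture>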
But even beyond that, most file formats have a bit of a header at the start of the file that declares the actual format of the file. Browsers can already understand that and use the correct renderer for a file without an extension.
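A minimal Python sketch of that kind of sniffing (the magic numbers below are the published ones for each format; the file name is hypothetical):

    def sniff(head: bytes) -> str:
        # head = first 12 bytes of the file
        if head.startswith(b"\x89PNG\r\n\x1a\n"):
            return "image/png"
        if head.startswith(b"\xff\xd8\xff"):
            return "image/jpeg"
        if head[:6] in (b"GIF87a", b"GIF89a"):
            return "image/gif"
        if head[:4] == b"RIFF" and head[8:12] == b"WEBP":
            return "image/webp"
        return "application/octet-stream"

    with open("mystery_file", "rb") as f:
        print(sniff(f.read(12)))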
https://caniuse.com/webp
Note that I'm looking at "all tracked," which excludes 2% "other" browsers in the data whose featureset is not known.
> Many […] programs […] already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
> Plus, you saw some broadcast companies in that list above. Behind the scenes, hardware and tooling are being updated to support the new PNG spec.
Whilst we're at it, please get rid of RGB and make it N channels too.
Libraries can choose to render that into a 3-channel, 8-bit buffer for legacy applications - but the data will be there for CMYK or HDR, or depth maps, or transparency, or focus stacking, or any other future feature!
There is just no need for a PNG update, just adopt JPEG XL.
> PNG is pronounced “ping”
See the end of Section 1 [0]
0: https://www.w3.org/TR/REC-png.pdf
[1] https://edition.cnn.com/2013/05/22/tech/web/pronounce-gif
Not sure I'll bother to reprogram myself from “png”, “pung”, or “pee-enn-gee”.
Probably the best news here. While you already can write custom data into a header, having Exif is good.
BTW: Does Exif have magnetometer (rotation) and acceleration (gravity) fields? I often wonder why Google doesn't save this information in the images the camera app saves. It could help so much with post-processing, like leveling the horizon or creating panoramas.
And is being able to read an image without an opt-in tag something that has to be explicitly enabled in the reference implementation's API?
Old and new decoders could now render an image with Exif rotation differently, since it's an optional chunk that can be ignored; and even for new decoders, the spec gives no recommendation for how to apply the Exif rotation.
It does say "It is recommended that unless a decoder has independent knowledge of the validity of the Exif data, the data should be considered to be of historical value only.", so hopefully the rotation will not be used by renderers. But it's only a vague recommendation; there's no strict "don't rotate the image", which would be the only backwards-compatible behavior.
With JPEG's Exif, there have also been bugs with the rotation being applied twice, e.g. the desktop environment and the underlying library both doing it independently.
The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.
What are the arguments for this? It would seem easier for everyone to rotate and then store exif for the original rotation if necessary.
If a smartphone camera is doing it, then bad camera app!
It's basically a shame that the Exif metadata contains things that affect the rendering.
Writing the sensor rows out as-is and merely flagging the orientation avoids an extra rotation pass at capture time. This is particularly important on smartphones and battery-operated devices, which is why most smartphones simply save the photo the same way regardless of orientation and just add a display-rotated flag to the metadata.
It can be super annoying sometimes, as one can't really disable the feature on many devices. =3
Exif fields: https://exiv2.org/tags.html
Curious if animated SVGs are also a thing. I remember seeing some JavaScript-based SVG animations (it was an animated chatbot avatar) - but not sure if there is any standard framework.
Yes. Relevant animation elements:
• <set>
• <animate>
• <animateTransform>
• <animateMotion>
See https://www.w3schools.com/graphics/svg_animation.asp
https://shkspr.mobi/blog/2025/06/an-annoying-svg-animation-b...
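A minimal self-contained SMIL sketch (this keeps animating even when the SVG is loaded through an <img> tag):

    <svg xmlns="http://www.w3.org/2000/svg" width="120" height="40">
      <circle cx="20" cy="20" r="10" fill="teal">
        <!-- slide the circle back and forth, forever -->
        <animate attributeName="cx" values="20;100;20"
                 dur="2s" repeatCount="indefinite"/>
      </circle>
    </svg>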
This could possibly be used to build full-fledged games like Pong and Breakout :)
I'm not sure about the tools and DX around animated PNGs. Is that a thing?
Can animated PNG beat av1 or whatever?
[0] like for example these old Windows animations: https://www.randomnoun.com/wp/2013/10/27/windows-shell32-ani...
https://trac.ffmpeg.org/wiki/Encode/AV1#Losslessencoding
The AV1 spec [1] does not allow RGB color spaces, therefore AV1 cannot preserve RGB animations in a bit-identical fashion.
[1] https://aomediacodec.github.io/av1-spec/av1-spec.pdf
Animated PNGs can't beat GIF, never mind video compression algorithms.
Not entirely true, it depends on what's being displayed, see a few simple tests specifically constructed to show how much better APNG can be vs GIF and {,lossy} webp: http://littlesvr.ca/apng/gif_apng_webp.html
Of course I don't think it generalizes all that well…
edit: using the same ezgif webp and apng on an H.264 source, apng is suddenly 10x the size of webp. It seems apng is only better if the source is gif
That's not really true. Some websites lie to you by putting .gif in the address bar but then serving a file of a different type. File extensions are merely a convention and an address isn't a file name to begin with so the browser doesn't care about this attempt at end user deception one way or the other.
Nowadays, AVIF serves that purpose best I think.
SVG is just html5, it has full support for CSS, javascript with buttons, web workers, arbitrary fetch requests, and so on (obviously not supported by image viewers or allowed by browsers).
If you use an <img> tag, svgs are loaded in "restricted" mode. This disables scripting and external resources. However animation via either SMIL or CSS is still supported.
Also, while true-color GIFs seem to be possible, they are usually limited to 256 colors per image.
For those reasons alone APNG is much better than GIF.
Assuming next-gen PNG will still require a new decoder, they could just call it PNG2.
JPEG XL already provides everything most people have asked for in a lossless codec. If it has any problem, it is its encoding and decoding speed and resource usage.
The current champion among lossless image codecs is HALIC: https://news.ycombinator.com/item?id=38990568
"photo scanned in 2025, is about something in easter, before 1940 and after 1920"
For ambiguous dates there is the EDTF spec[1], which would be nice to see more widely adopted (a few examples below).
[0] https://www.media.mit.edu/pia/Research/deepview/exif.html
[1] https://www.loc.gov/standards/datetime/
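A few illustrative EDTF expressions for dates like the one above:

    1920/1940     an interval: some time between 1920 and 1940
    192X          an unspecified year in the 1920s
    1925?         possibly 1925, but uncertain
    1925~         approximately 1925
    1925-04-XX    an unknown day in April 1925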
Different software reacts in different ways to partial specifications of yyyy/mm/dd, such that you can try some of the cute tricks, but probably only one software package honours it.
And the majors ignore almost all fields other than a core set of one or two, disagree about their semantics, and also do weird stuff with the file name and atime/mtime.
This worries me. Because presumably, changing the compression algorithm will break backwards compatibility, which means we'll start to see "png" files that aren't actually png files.
It'll be like USB-C but for images.
[1] https://github.com/w3c/png/issues/39#issuecomment-2674690324
https://svgees.us/blog/img/revoy-cICP-bt.2020.png uses the new colour space. If your software and monitor can handle it, you see better colour than I, otherwise, you see what I see.
Now, PNG datatype for AmigaOS will need upgrading.
The main use case for PNG is web browsers and all of them seem to be on board. Using old web browsers is a bad idea. You do get these relics showing up using some old version of internet explorer. But some images not rendering is the least of their problems. The main challenge is actually going to be updating graphics tools to export the new files. And teaching people that sRGB maybe isn't good enough any more. That's going to be hard since most people have no clue about color spaces.
Anyway, that gives everybody plenty of time to upgrade. By the time this stuff is widely used, it will be widely supported. So, you kind of get forward compatibility that way. Your browser already supports the new format. Your image editor probably doesn't.
This is news to me. I'm pretty sure the main use case for PNG is lossless transparent graphics.
The PNG format is specifically designed to allow software to read the parts they can understand and to leave the parts they cannot. Having an extensible format and electing never to extend it seems pointless.
This proves OP's analogy regarding USB-C. Having PNG as some generic container for lossless bitmap compression means fragmentation in libraries, hardware support, etc. The reason being that if the container starts to support too many formats, implementations will start restricting themselves to only the subsets the implementers care about.
For instance, almost nobody fully implements MPEG-4 Part 3; the standard includes dozens of distinct codecs. Most software only targets a few profiles of AAC (specifically, the LC and HE profiles), and MPEG-1 Layer 3 audio. Next to no software bothers with e.g. ALS, TwinVQ, or anything else in the specification. Even libavcodec, if I recall correctly, does not implement encoders for MPEG-4 Part 3 formats like TwinVQ. GP's fear is exactly this -- that PNG ends up as a standard too large to fully implement and people have to manually check which subsets are implemented (or used at all).
And now think of the younger generation that has grown up with smartphones and have been trained to not even know what a file is. I remember this story about senior high school students failing their school tests during covid because the school software didn't support heif files and they were changing the file extension to jpg to attempt to convert them.
I have no trust that the software ecosystem will adapt. For instance, the standard libraries of the .NET Framework have been fossilised, multimedia-wise, since 2008 or so. I don't believe HEIF is even supported to this day. So that's a whole bunch of code which, unless the developers create workarounds, will never support a newer PNG format.
But that's typical for file extensions. Consider EXE – it is probably an executable, but an executable for what? Most commonly Windows – but which Windows version will this EXE run on? Maybe this EXE only works on Windows 11, and you are still running Windows 10. Or maybe you are running x86-64 Windows, but this EXE is actually for ARM or MIPS or Alpha. Or maybe it is for some other platform which uses that extension for executable files – such as DOS, OS/2, 16-bit Windows, Windows CE, OpenVMS, TOPS-10, TOPS-20, RSX-11...
.html, .js, .css – suggest to use a web browser, but don't tell you whether they'll work with any particular one. Maybe they use the latest features but you use an old web browser which doesn't support them. Maybe they require deprecated proprietary extensions and so only work on some really old browser. Maybe this HTML page only works on Internet Explorer. Maybe instead of UTF-8 it is in some obscure legacy character set which your browser doesn't support.
.zip – supports extensible compression and encryption methods, your unzip utility might not support the methods used to compress/encrypt this particular zip file. This is actually normal for very old ZIP files (from the 1980s) – early versions of PKZIP used various deprecated compression mechanisms, which few contemporary unzip utilities support. The format was extended to 64-bit without changing the extension, there's still a lot of 32-bit only implementations out there. ZIP also supports platform-specific file attributes–e.g. PKZIP for z/OS creates ZIP files which contain metadata about mainframe data storage formats, unzip on another platform is going to have no idea what it means, but the metadata is actually essential to interpreting the data correctly (e.g. if RECFM=V you need to parse the RDWs, if RECFM=F there won't be any)
.xml - okay, it is XML – but that tells you nothing about the actual schema. Maybe you were expecting this xml file to contain historical stock prices, but instead it is DocBook XML containing product documentation, and your market data viewer app chokes on it. Or maybe it really is historical stock prices, but you are using an old version of the app which doesn't support the new schema, so you can't view it. Or maybe someone generated it on a mainframe, but due to a misconfiguration the file came out in EBCDIC instead of ASCII, and your app doesn't know how to read EBCDIC, yet the mainframe version of the same app reads it fine...
.doc - people assume it is legacy (pre-XML) Microsoft Word: every version of which changed the file format, old versions can't read files created with newer versions correctly or at all, conversely recent versions have dropped support for files created in older versions, e.g. current Office versions can't read DOC files created with Word for DOS any more... but back in the 1980s a lot of people used that extension for plain text files which contained documentation. And it was also used by incompatible proprietary word processors (e.g. IBM DisplayWrite) and also desktop publishing packages (e.g. FrameMaker, Interleaf)
.xmi – I've seen this extension used for both XML Model Interchange (XML-based standard for exchanging UML diagrams) and XMIT (IBM mainframe file archive format). Because extensions aren't guaranteed to be unique, many incompatible file formats share the same extension
.com - is it an MS-DOS program, or is it DCL (Digital Command Language)?
.pic - probably some obscure image format, but there are dozens of possibilities
.img – could be either a disk image or a visual image, either way dozens of incompatible formats which use that extension
.db – nowadays most likely SQLite, but a number of completely incompatible database engines have also used this extension. And even if it is SQLite, maybe your version of SQLite is too old to read this file because it uses some features only found in newer versions. And even if SQLite can read it, maybe it has the wrong schema for your app, or maybe a newer version of the same schema which your old version that app doesn't support, or an old version of the schema which the current version of the app has dropped support for...
The same is also true for the most advanced codecs. The MPEG-* family and MP3 come to mind.
Nothing stops PNG from defining a "set of decoders", and let implementers loose on that spec to develop encoders which generate valid files. Then developers can go to town with their creativity.
Regarding the potential for fragmentation of the png ecosystem the alternative is a new file format which has all the same support issues. Every time you author something you make a choice between legacy support and using new features.
From a developer perspective, adding support for a new compression type is likely to be much easier than implementing logic for an entirely new format. It's also less surface area for bugs. In terms of libraries, support added to a dependency propagates to all consumers with zero additional effort. Meanwhile adding a new library for a new format is linear effort with respect to the number of programs.
Not sure what you're talking about.
If you want to check yours (bash needs shopt -s globstar for ** to work): mediainfo **/*.mp4 | grep -A 2 '^Audio' | grep Format | sort | uniq -c
https://en.wikipedia.org/wiki/TwinVQ#TwinVQ_in_MPEG-4 tells the story of TwinVQ in MPEG-4.
And considering we already have plenty of more advanced competing lossless formats, I really don't see why "feed a BMP to deflate" needs a new, incompatible spin in 2025.
More generally, PNG has a simple feature to specify what's needed. A file consists of a number of chunks, and one bit in the chunk specifies whether that chunk is required for display. All of the extensions I've seen in the past decades set that bit to "optional".
For example, this update includes a chunk containing EXIF data. As you'd expect, the exif chunk sets that bit to "optional".
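For the curious, that bit is bit 5 of the first byte of the chunk type: uppercase means critical, lowercase means ancillary. A minimal Python sketch that walks a PNG and reports each chunk's status (no CRC validation; assumes a well-formed file):

    import struct

    def list_chunks(path):
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG"
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, ctype = struct.unpack(">I4s", header)
                # Uppercase first letter (bit 5 clear) = critical chunk.
                kind = "critical" if (ctype[0] & 0x20) == 0 else "ancillary"
                print(f"{ctype.decode('latin-1')}: {kind}, {length} bytes")
                f.seek(length + 4, 1)  # skip chunk data + CRC
                if ctype == b"IEND":
                    break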
Other than JXL which still has somewhat spotty support in older software? TIFF comes to mind but AFAIK its size tends to be worse than PNG. Edit: Oh right OpenEXR as well. How widespread is support for that in common end user image viewer software though?
Yeah, we know. That's terrible.
If you've created an extensible file format, but you never need to extend it, you've done everything right, I'd say.
That's what I would call really extensible, but then there may be no limits, and hackers/viruses could easily have a field day.
Will sooner or later be used to implement RCEs. Even if you could do a restriction as is done for eBPF, that code still has to execute.
Best would be not to extend it.
In an ideal world, yes. In practice however, if some field doesn't change often, then software will start to assume that it never changes, and break when it does.
TLS has learned this the hard way when they discovered that huge numbers of existing web servers have TLS version intolerance. So now TLS 1.2 is forever enshrined in the ClientHello.
So then it was pointless for PNG to be extensible? Not sure what your argument is.
That being said, they can also do dumb things. However, right at the end of the sentence you quote, they say:
> we want to make sure we do it right.
So there's hope.
That's just changing an implementation detail of the encoder, and you don't need spec changes for that; e.g. there are PNG compressors which support zopfli for extra gains on the DEFLATE (at a not-insignificant cost). This is transparent to the client, as the output is still just a DEFLATE stream.
I’m not saying this is what will happen — but if I was able to construct a plausible approach to compression in ten minutes, then perhaps it’s a bit early to predict the doom of compatibility.
EG your GPU and monitor both have a USB-C port. Plug them together with the right USB cable and you'll get images displayed. Plug them together with the wrong USB cable and you won't.
USB 3 didn't have this issue - every cable worked with every port.
I believe the problem here is that you will have PNG images that “look” like you can open them but can’t.
This is just pretending that if you have a cat and a dog in two bags and you call it “a bag”, it’s one and the same thing…
Labelling is a poor band-aid on the root problem - consumer cables which look identical and fit identically should work wherever they fit.
There should never have been a power-only spec for USB-C socket dimensions.
If a cable supports both power and data, it must fit in all sockets. If a cable supports only power it must not fit into a power and data socket. If a cable supports only data, it should not fit into a power and data socket.
It is possible to have designed the sockets under these constraints, with the caveat that they only go in one way. I feel that that would have been a better trade-off. Making them reversible means that you cannot have a design which enforces cable type.
That's even more confusing than the current state of affairs. If my phone has power and data socket, then I cannot use power only cable to only charge it? Presumably with the charger that has power only socket. So I need a cable with two different ends anyway. Just go micro-USB at this point :)
Funnily enough, there is a 100% overkill way to solve such issues. Just use super expensive certified TB cables. Well... plus a A-to-C adapter for noncompliant devices, I guess.
Well, yes.
Why can't you use a power+data cable for the vape (or whichever appliance takes both)? What's the deal-breaker here?
The alternative is labeling, or plugging cables in to see if they do what you want them to do.
Both are a poor user interface.
If PNG gets extended, it's entirely plausible that someone will view a PNG in their browser, save it, and then not be able to open the file they just saved.
There are those who claim "backwards compatibility" doesn't cover "how you use it" - but roughly none of the people who now have to deal with broken software care about such semantic arguments. It used to work, and now it doesn't.
It's a dichotomy. Either the provider accommodates users with older software or not. The file extension or internal headers don't change that reality.
Another example: new versions of PDF can adopt all the bells and whistles in the world, but I will still be saving anything intended to be long-lived as PDF/A-1a, which means I don't get to use any of those features.
The USB-C spec does anything but break backwards compatibility.
Do they mention which C libraries use this spec?
What was broken was the promise of a "single cable to rule them all", partly due to manufacturers ignoring the requirements of USB-C (missing resistors or PD chips to negotiate voltages, requiring workarounds with A-to-C adapters), and a myriad of optional stuff, that might be supported or not, without a clear way to indicate it.
Also if you forbid evolving existing formats, the only alternative to improve is to introduce a new format, and I argue that it would be causing even more fragmentation and be more difficult to adopt to. Look at all the drama surrounding JPEG XL.
> Many of the programs you use already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
It might be too late to rename png to .png4 or something. It sounds like we're using the new png standard already in a lot of our software.
[0] https://svgees.us/blog/cICP.html
Back then, there were no libraries in C# for it, but it's actually quite easy to make APNG from PNGs directly by writing chunks with correct headers, no encoders needed (assuming PNGs are already encoded as input).
https://github.com/NightElfik/Malsys/blob/master/src/Malsys....
https://marekfiser.com/projects/malsys-mareks-lsystems/
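The chunk surgery really is modest. A Python sketch of the same idea (assumes truecolor inputs with identical dimensions, and skips error handling; the APNG-specific chunks are acTL, fcTL, and fdAT):

    import struct, zlib

    def chunks(png):
        # Yield (type, data) for each chunk after the 8-byte signature.
        off = 8
        while off < len(png):
            length, ctype = struct.unpack_from(">I4s", png, off)
            yield ctype, png[off + 8 : off + 8 + length]
            off += 12 + length  # length + type + data + CRC

    def chunk(ctype, data):
        return (struct.pack(">I", len(data)) + ctype + data
                + struct.pack(">I", zlib.crc32(ctype + data)))

    def fctl(seq, w, h, delay_num=1, delay_den=10):
        # Frame control: full-frame, no offset, default dispose/blend.
        return chunk(b"fcTL", struct.pack(">IIIIIHHBB",
                     seq, w, h, 0, 0, delay_num, delay_den, 0, 0))

    def apng(pngs):
        ihdr = next(d for t, d in chunks(pngs[0]) if t == b"IHDR")
        w, h = struct.unpack(">II", ihdr[:8])
        out = [b"\x89PNG\r\n\x1a\n", chunk(b"IHDR", ihdr),
               chunk(b"acTL", struct.pack(">II", len(pngs), 0))]  # 0 = loop forever
        seq = 0
        for i, png in enumerate(pngs):
            out.append(fctl(seq, w, h))
            seq += 1
            for ctype, data in chunks(png):
                if ctype != b"IDAT":
                    continue
                if i == 0:
                    out.append(chunk(b"IDAT", data))  # first frame: plain IDAT
                else:
                    # Later frames carry their data in fdAT, prefixed with a
                    # sequence number shared with the fcTL chunks.
                    out.append(chunk(b"fdAT", struct.pack(">I", seq) + data))
                    seq += 1
        out.append(chunk(b"IEND", b""))
        return b"".join(out)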
While I welcome that there is now PNG with animations, I am less impressed about how Mozilla chose to push for it.
Using PNG's magic numbers and pretending to existing software that it is just a normal PNG? That is the same mindset that led to HTML becoming tag soup. After all, HTML with a <blink> tag is still HTML, no?
I think they could have achieved animated PNG standardization much faster with a more humble and careful approach.
Lossless AVIF is not competitive.
However, lossless WEBP does not support indexed color images. If you need palettes, you're stuck with PNG for now.
After 20 years of success, we can't resist the temptation to mess with what works.
How can you call this basic fail a success?
For example, 16-bit (integer) TIFF files 'with headroom', i.e. where some bits were used to represent data over 1.0 (HDR), were a common approach for VFX work in the 90's.
16-bit float TIFF has also been a thing for 33 years. Adobe DNG is modeled after TIFF. High-end offline renderers have traditionally used TIFF (with mip-maps) to store textures.
TIFF supports tags so primaries and white point or a known color space name can be stored in the file.
The format is so versatile, it is used everywhere.
And of course it also supports indexed color, i.e. a non-negotiable feature at the time PNG was introduced.
PNG was meant to replace GIF. Instead of looking what was already there some group of "experts" and "enthusiasts" (quote Wikipedia) succumbed to their NIH complexes. If licensing/patent woes over compression algorithms had been a motivator, why not just add a new one to TIFF?
The fact that PNG stores straight/unpremultiplied alpha says everything, if you know anything about imaging in computer graphics (compositing and filtering want premultiplied alpha; with straight alpha, the color values of fully transparent pixels bleed into interpolated results).
And the fact that the updated format spec just released didn't address this tells you everything you need to know about the group in charge of that, today.
PNG is the VHS of image formats. It should never have seen the light of day in the first place, nor gotten the adoption it did.
Yeah, I love the fact that you can embed a PDF file inside a TIFF.
The continued popularity of non-HDR 1080p screens on laptops is a bleak reminder that most people would rather save a couple hundred bucks than buy HDR capable hardware.
HDR is great for TVs and a nice-to-have on phones (who mostly get it for free because OLEDs are the norm these days), but display technology only advances as much as its availability in low-cost devices.
Not sure how HDR encoding works, but my impression is that you can set a nominal white point other than (1, 1, 1) in your specified colorspace. This is an extension, but orthogonal to specifying the colorspace itself and the gamut.
It has, but WWW is still de facto sRGB, and will be for a long time still. But again, I'm not strictly opposed to evolving PNG, I just hope they don't ruin it in the process, because that's usually what happens when something gets update for a modern audience. I'll be watching with mixed optimism and concern.
A very basic rec.709 workflow tutorial:
https://www.youtube.com/watch?v=lf8COHAgHJs
The Andreas Dürr LUT pack:
https://www.youtube.com/watch?v=dDKK54CeXgM
https://cinematiccookie.gumroad.com/l/bseftb?layout=profile