Parallel compression/decompression is already possible via Z_SYNC_FLUSH.
LeoPanthera · 3h ago
> I know you all immediately wondered, better compression?. We're already working on that.
This worries me. Because presumably, changing the compression algorithm will break backwards compatibility, which means we'll start to see "png" files that aren't actually png files.
It'll be like USB-C but for images.
jillesvangurp · 17m ago
Old PNGs will work just fine. And forward compatibility is much less important.
The main use case for PNG is web browsers and all of them seem to be on board. Using old web browsers is a bad idea. You do get these relics showing up using some old version of internet explorer. But some images not rendering is the least of their problems. The main challenge is actually going to be updating graphics tools to export the new files. And teaching people that sRGB maybe isn't good enough any more. That's going to be hard since most people have no clue about color spaces.
Anyway, that gives everybody plenty of time to upgrade. By the time this stuff is widely used, it will be widely supported. So you kind of get forward compatibility that way: your browser already supports the new format; your image editor probably doesn't.
lifthrasiir · 3h ago
Better compression can also mean a new set of filter methods or a new interlacing algorithm. But yeah, any of them would cause an instant incompatibility. As noted in the relevant issue [1], we will need a new media type at the very least.
I am hopeful that whatever better compression arrives doesn't end up multiplying memory requirements or increasing the CPU burden, especially during decompression.
Now, PNG datatype for AmigaOS will need upgrading.
Lerc · 2h ago
It has fields to say what compression is used. Existing software should handle a new compression method by recognizing the file as a valid PNG that it simply can't decompress.
The PNG format is specifically designed to allow software to read the parts they can understand and to leave the parts they cannot. Having an extensible format and electing never to extend it seems pointless.
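That "field saying what compression is used" is a single byte in the IHDR chunk. A minimal sketch (function names are my own, not from any library) of how a decoder could distinguish "valid PNG, unknown compression" from "corrupt file":

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def read_compression_method(data: bytes) -> int:
    """Return the compression-method byte from a PNG's IHDR chunk.

    Method 0 (DEFLATE) is the only value defined so far; a decoder that
    sees anything else can reject the file as "valid PNG, compression I
    don't support" instead of treating it as corrupt.
    """
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    # IHDR must be the first chunk: 4-byte length, 4-byte type, payload, CRC.
    length, ctype = struct.unpack(">I4s", data[8:16])
    if ctype != b"IHDR" or length != 13:
        raise ValueError("malformed IHDR")
    # IHDR payload: width(4) height(4) bit-depth(1) colour-type(1)
    #               compression(1) filter(1) interlace(1)
    return data[16 + 10]

def chunk(ctype: bytes, payload: bytes) -> bytes:
    """Serialize one chunk: length, type, payload, CRC-32 of type+payload."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

# Build a minimal 1x1 greyscale PNG in memory to demonstrate.
ihdr = struct.pack(">II5B", 1, 1, 8, 0, 0, 0, 0)
png = (PNG_SIGNATURE + chunk(b"IHDR", ihdr)
       + chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
       + chunk(b"IEND", b""))
print(read_compression_method(png))  # → 0 (DEFLATE)
```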
koito17 · 2h ago
> Having an extensible format and electing never to extend it seems pointless.
This proves the OP's analogy regarding USB-C. Having PNG as some generic container for lossless bitmap compression means fragmentation in libraries, hardware support, etc., because once the container supports too many formats, implementations start restricting themselves to only the subsets the implementers care about.
For instance, almost nobody fully implements MPEG-4 Part 3; the standard includes dozens of distinct codecs. Most software only targets a few profiles of AAC (specifically, the LC and HE profiles), and MPEG-1 Layer 3 audio. Next to no software bothers with e.g. ALS, TwinVQ, or anything else in the specification. Even libavcodec, if I recall correctly, does not implement encoders for MPEG-4 Part 3 formats like TwinVQ. GP's fear is exactly this -- that PNG ends up as a standard too large to fully implement and people have to manually check which subsets are implemented (or used at all).
cm2187 · 5m ago
But where the analogy with USB-C is very good is that just like USB-C, there is no way for a user to tell from the look of the port or the file extension what the capabilities are. Which even for a fairly tech savvy user like me is frustrating. I have a bunch of cables, some purchased years ago, how do I know what is fit for what?
And now think of the younger generation that has grown up with smartphones and have been trained to not even know what a file is. I remember this story about senior high school students failing their school tests during covid because the school software didn't support heif files and they were changing the file extension to jpg to attempt to convert them.
I have no trust that the software ecosystem will adapt. For instance, the standard libraries of the .NET Framework have been fossilized, multimedia-wise, since around 2008. I don't believe HEIF is supported even to this day. So that's a whole bunch of code which, unless developers create workarounds, will never support a newer PNG format.
bayindirh · 1h ago
JPEG is no different: only the decoder is specified. As long as the decoder turns what you give it into the image you wanted to see, you can implement anything. This is how imgoptim/squash/aerate/dietJPG work, by (ab)using that flexibility.
The same is also true of the most advanced codecs; the MPEG-* family and MP3 come to mind.
Nothing stops PNG from defining a "set of decoders", and let implementers loose on that spec to develop encoders which generate valid files. Then developers can go to town with their creativity.
cm2187 · 34s ago
Video files aren't a good analogy. Before God placed VLC and ffmpeg on earth, you had to install a galaxy of codecs on your computer to get a chance to read a video file and you could never tell exactly what codec was stored in a container. Unfortunately there is no vlc and ffmpeg for images (I mean there is, the likes of imagemagick, but the vast majority of software doesn't use them).
fc417fc802 · 1h ago
I honestly don't see an issue with the mpeg-4 example.
Regarding the potential for fragmentation of the png ecosystem the alternative is a new file format which has all the same support issues. Every time you author something you make a choice between legacy support and using new features.
From a developer perspective, adding support for a new compression type is likely to be much easier than implementing logic for an entirely new format. It's also less surface area for bugs. In terms of libraries, support added to a dependency propagates to all consumers with zero additional effort. Meanwhile adding a new library for a new format is linear effort with respect to the number of programs.
7bit · 36m ago
I never once in 25 years encountered an issue with an MP4 container that could not be solved by installing either the DivX or Xvid codec. And I extensively used MP4's metadata for music, even with esoteric tags.
Not sure what you're talking about.
Arnt · 17m ago
He's saying that in 25 years, you used only the LC and HE profiles, and didn't encounter TwinVQ even once. I looked at my thousand-odd MPEG-4 files. They're overwhelmingly AAC LC, a little bit of AAC LC SBR, no TwinVQ at all.
If you want to check yours: mediainfo **/*.mp4 | grep -A 2 '^Audio' | grep Format | sort | uniq -c
The difference between valid PNG you can't decompress and invalid PNG is fairly irrelevant when your aim is to get an image onto the screen.
And considering we already have plenty of more advanced competing lossless formats, I really don't see why "feed a BMP to deflate" needs a new, incompatible spin in 2025.
fc417fc802 · 53m ago
> plenty of more advanced competing lossless formats
Other than JXL which still has somewhat spotty support in older software? TIFF comes to mind but AFAIK its size tends to be worse than PNG. Edit: Oh right OpenEXR as well. How widespread is support for that in common end user image viewer software though?
pvorb · 1h ago
Extending the format just because you can – and breaking backwards compatibility along the way – is even more pointless.
If you've created an extensible file format, but you never need to extend it, you've done everything right, I'd say.
jajko · 1h ago
What about an extensible format that would have, as part of its header, an algorithm (in some recognized DSL) describing how to decompress it (or any other step required for image manipulation)? I know it's not so much about PNG as about some future format.
That's what I would call really extensible, but then there may be no limits and hacking/viruses could have easily a field day.
lelanthran · 1h ago
> What about an extensible format that would have as part of header an algorithm (in some recognized DSL) of how to decompress it (or any other step required for image manipulation)?
Will sooner or later be used to implement RCEs. Even if you could do a restriction as is done for eBPF, that code still has to execute.
Best would be not to extend it.
mort96 · 2h ago
> Adding another compression form should be handled by existing software as recognizing it as a valid PNG that they can't decompress.
Yeah, we know. That's terrible.
chithanh · 1h ago
> Adding another compression form should be handled by existing software
In an ideal world, yes. In practice however, if some field doesn't change often, then software will start to assume that it never changes, and break when it does.
TLS has learned this the hard way when they discovered that huge numbers of existing web servers have TLS version intolerance. So now TLS 1.2 is forever enshrined in the ClientHello.
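For concreteness, here is what that enshrinement looks like on the wire: TLS 1.3 (RFC 8446) keeps the ClientHello's legacy_version field at 0x0303 ("TLS 1.2") forever, and moves real negotiation into the supported_versions extension so version-intolerant servers never see an unfamiliar version byte. A minimal byte-level sketch (helper name is my own):

```python
import struct

LEGACY_VERSION = 0x0303          # "TLS 1.2", pinned forever in ClientHello
SUPPORTED_VERSIONS_EXT = 43      # extension type from RFC 8446

def supported_versions_extension(versions):
    """Serialize the supported_versions extension: type, length,
    then a 1-byte-length-prefixed list of 2-byte version codes."""
    body = b"".join(struct.pack(">H", v) for v in versions)
    payload = struct.pack(">B", len(body)) + body
    return struct.pack(">HH", SUPPORTED_VERSIONS_EXT, len(payload)) + payload

# Offer TLS 1.3 (0x0304) and TLS 1.2 (0x0303), in preference order.
ext = supported_versions_extension([0x0304, 0x0303])
print(ext.hex())  # → '002b00050403040303'
```

The new information rides in a field old servers were already required to skip; the analogy to stuffing a new compression method into an existing PNG chunk structure is fairly direct.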
HelloNurse · 1h ago
Extensibility of PNG has been amply used, as intended, for proprietary chunks that hold application specific data (e.g. PICO-8 games) without bothering other software.
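The mechanism that makes this safe is encoded in the chunk type itself: bit 5 (the lowercase bit) of each of the four type letters is a property flag, which is how software decides whether an unknown chunk can be ignored or copied. A small sketch (function name is my own):

```python
def chunk_properties(ctype: bytes) -> dict:
    """Decode the property bits carried by the letter case of a 4-byte
    PNG chunk type. This is what lets decoders skip proprietary chunks
    they don't understand without rejecting the file."""
    return {
        "ancillary":    bool(ctype[0] & 0x20),  # lowercase: decoder may ignore
        "private":      bool(ctype[1] & 0x20),  # lowercase: not spec-registered
        "reserved":     bool(ctype[2] & 0x20),  # must be uppercase today
        "safe_to_copy": bool(ctype[3] & 0x20),  # editors may pass it through
    }

print(chunk_properties(b"IHDR"))  # critical, public chunk
print(chunk_properties(b"tEXt"))  # ancillary, public, safe to copy
```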
dooglius · 1h ago
> Having an extensible format and electing never to extend it seems pointless.
So then it was pointless for PNG to be extensible? Not sure what your argument is.
mrheosuper · 2h ago
Does the USB-C spec break backward compatibility? A 2018 MacBook works perfectly fine with a 2025 USB-C charger.
danielheath · 1h ago
Some things don't work unless you use the right kind of USB-C cable.
EG your GPU and monitor both have a USB-C port. Plug them together with the right USB cable and you'll get images displayed. Plug them together with the wrong USB cable and you won't.
USB 3 didn't have this issue - every cable worked with every port.
mrheosuper · 1h ago
That is not a backward-compatibility problem. If a cable did 100 W charging with a PD 2.0 device but only 60 W with a PD 3.1 device, then I would agree with you.
yoz-y · 1h ago
The problem is not backward compatibility but labeling. A USB-C cable looks universal but isn’t. Some of them just charge, some do data, some do PD, some give you access to high speed. But there is no way to know.
I believe the problem here is that you will have PNG images that “look” like you can open them but can’t.
voidUpdate · 1h ago
That's not just an issue with usb-c. normal usb a and b cables can have data or no data depending on how stingy the company wants to be, and you can't know until you test it
Xss3 · 11m ago
You can get pretty good guesses just by feel and length. Tiny with a super thin cable? Probably charge only.
mystifyingpoi · 1h ago
Cable labeling could fix 99% of the issues with USB-C compat. The solution should never be blaming consumer for buying the wrong cable. Crappy two-wire charge-only cables are perfectly fine for something like a night desk lamp. Keep the poor cables, they are okay, just tell me if that's the case.
ay · 1h ago
Same thing with PNG. Just call the format with the new additions PNGX, so the user can clearly see that the reason their software can't display the image is not file corruption.
This is just pretending that if you have a cat and a dog in two bags and you call it “a bag”, it’s one and the same thing…
lelanthran · 48m ago
> Cable labeling could fix 99% of the issues with USB-C compat.
Labelling is a poor band-aid on the root problem - consumer cables which look identical and fit identically should work wherever they fit.
There should never have been a power-only spec for USB-C socket dimensions.
If a cable supports both power and data, it must fit in all sockets. If a cable supports only power it must not fit into a power and data socket. If a cable supports only data, it should not fit into a power and data socket.
It is possible to have designed the sockets under these constraints, with the caveat that they only go in one way. I feel that that would have been a better trade-off. Making them reversible means that you cannot have a design which enforces cable type.
Xss3 · 8m ago
So since my vape (example, i dont vape) has a power and data slot for charging and firmware updates, i should be limited to only using dual purpose cables day to day rather than a power only cable?
mrheosuper · 1h ago
The parent said "changing the compression algorithm will break backwards compatibility", which I assume means something that works now won't work in the future. The USB-C spec intentionally tries to avoid that.
danielheath · 1h ago
Today, I can save a PNG file off a random website and then open it.
If PNG gets extended, it's entirely plausible that someone will view a PNG in their browser, save it, and then not be able to open the file they just saved.
There are those who claim "backwards compatibility" doesn't cover "how you use it" - but roughly none of the people who now have to deal with broken software care about such semantic arguments. It used to work, and now it doesn't.
fc417fc802 · 46m ago
The alternative is the website operator who wants to save on bandwidth instead adopts JXL or WEBP or what have you and ... the end user with old software still can't open it.
It's a dichotomy. Either the provider accommodates users with older software or not. The file extension or internal headers don't change that reality.
Another example, new versions of PDF can adopt all the bells and whistles in the world but I will still be saving anything intended to be long lived as 1/a which means I don't get to use any of those features.
mrheosuper · 53m ago
Which is what the USB-C spec has been avoiding so far. Even the USB4 spec repeatedly mentions that the new spec should be compatible with TB3 devices.
The USB-C spec is anything but backward-incompatible.
johnisgood · 1h ago
This is what I fear, too.
Do they mention which C libraries use this spec?
globular-toast · 1h ago
Some aren't even USB. Thunderbolt and DisplayPort both use USB-C too.
Xss3 · 10m ago
Thunderbolt meets usbc specs (and exceeds them afaik), so it is still usb...
mystifyingpoi · 1h ago
Yeah, I also don't think they've broken backwards compat ever. Super high end charger from 2024 can charge old equipment from 2014 just fine with regular 5V.
What was broken was the promise of a "single cable to rule them all", partly due to manufacturers ignoring the requirements of USB-C (missing resistors or PD chips to negotiate voltages, requiring workarounds with A-to-C adapters), and a myriad of optional stuff, that might be supported or not, without a clear way to indicate it.
techpression · 2h ago
I don’t know if it’s the spec or just a plethora of vendors that ignore it, but I have many things with a USB-C port that require USB-A as the source. USB-C to A to C works (yay dongles), but not plain C to C.
So maybe it’s not really breaking backwards compatibility, just a weird mix of a port and the communication being separate standards.
mrheosuper · 1h ago
Because those USB-C ports do not follow the spec. If they had followed the spec from day one, there would be no problem even now.
fragmede · 1h ago
It's vendors just changing the physical port but not updating the electronics. Specifically, 5.1 kΩ pull-down resistors on the CC1 and CC2 pins are needed on the device (sink) side in order for a C-to-C cable to work.
skywal_l · 3h ago
Can't you improve a compression algorithm and still produce valid input for existing decompressors? PNG is based on the same DEFLATE compression as zip; there are certainly ways to improve a DEFLATE encoder without breaking backwards compatibility.
That being said, they also can do dumb things however, right at the end of the sentence you quote they say:
> we want to make sure we do it right.
So there's hope.
masklinn · 3h ago
> Can't you improve a compression algorithm and still produce valid input for existing decompressors? PNG is based on the same DEFLATE compression as zip; there are certainly ways to improve a DEFLATE encoder without breaking backwards compatibility.
That's just changing an implementation detail of the encoder, and you don't need spec changes for that. E.g., there are PNG compressors which support zopfli for extra gains on the DEFLATE stream (at a non-insignificant cost). This is transparent to the client, as the output is still just a DEFLATE stream.
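The point that a smarter encoder is invisible to decoders can be shown with the stdlib alone. Different effort levels (standing in here for a stock encoder vs. a zopfli-style optimizer, which is the assumption in this sketch) produce different bytes, but any DEFLATE decoder accepts both:

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 200

# Two encoders with different effort levels; the compressed bytes differ...
fast = zlib.compress(data, level=1)
best = zlib.compress(data, level=9)

# ...but every conforming decoder recovers the identical original stream,
# which is exactly why zopfli-optimized PNGs need no spec change.
assert zlib.decompress(fast) == zlib.decompress(best) == data
print(len(fast), len(best))  # sizes differ; validity doesn't
```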
vhcr · 3h ago
That's what OptiPNG already does.
colanderman · 2h ago
One could imagine a PNG file which contains a low-resolution version of the image with a traditional compression algorithm, and encodes additional higher-resolution detail using a new compression algorithm.
bmacho · 2h ago
+1. Why not name it PNG4 or something? It's better if compatibility is obvious upfront.
josephg · 1h ago
I think if they did that, nobody would use it. And anyway, from the article:
> Many of the programs you use already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
It might be too late to rename png to .png4 or something. It sounds like we're using the new png standard already in a lot of our software.
tonyedgecombe · 1h ago
>After 20 years of stagnation, PNG is back with renewed vigor!
After 20 years of success, we can't resist the temptation to mess with what works.
eviks · 1m ago
> [not] Officially supports Exif data
How can you call this basic fail a success?
encom · 1h ago
Yea I'm mildly concerned about this as well. PNG's age is a feature, in a time where software development has gone to hell.
HelloNurse · 1h ago
Without the new HDR and color profile handling, PNG was still useful but significantly obsolete. Display hardware has progressed over a few decades, raising the bar for image files.
virtualritz · 19m ago
There is nothing in display hardware today that TIFF couldn't handle already.
For example, 16-bit (integer) TIFF files 'with headroom', i.e. where some bits represent data over 1.0 (HDR), were a common approach for VFX work in the '90s.
16-bit float TIFF has also been a thing for 33 years. Adobe DNG is modeled after TIFF. High-end offline renderers have traditionally used TIFF (with mip-maps) to store textures.
TIFF supports tags so primaries and white point or a known color space name can be stored in the file.
The format is so versatile, it is used everywhere.
And of course it also supports indexed color, i.e. a non-negotiable feature at the time PNG was introduced.
PNG was meant to replace GIF. Instead of looking at what was already there, some group of "experts" and "enthusiasts" (quoting Wikipedia) succumbed to their NIH complexes.
If licensing/patent woes over compression algorithms had been a motivator, why not just add a new one to TIFF?
The fact that PNG stores straight/unpremultiplied alpha says everything if you know anything about imaging in computer graphics.
And the fact that the updated format spec just released didn't address this tells you everything you need to know about the group in charge of that, today.
PNG is the VHS of image formats. It should never have seen the light of day in the first place, nor gotten the adoption it did.
leni536 · 28m ago
PNG already supports color profiles, but probably not HDR. I would say that the gamut argument in the article is misleading, you can already encode a wider gamut.
Not sure how HDR encoding works, but my impression is that you can set a nominal white point other than (1, 1, 1) in your specified colorspace. This is an extension, but orthogonal to specifying the colorspace itself and the gamut.
jeroenhd · 31m ago
> Display hardware has progressed
The continued popularity of non-HDR 1080p screens on laptops is a bleak reminder that most people would rather save a couple hundred bucks than buy HDR capable hardware.
HDR is great for TVs and a nice-to-have on phones (who mostly get it for free because OLEDs are the norm these days), but display technology only advances as much as its availability in low-cost devices.
encom · 53m ago
>Display hardware has progressed
It has, but the web is still de facto sRGB, and will be for a long time yet. But again, I'm not strictly opposed to evolving PNG; I just hope they don't ruin it in the process, because that's usually what happens when something gets updated for a modern audience. I'll be watching with mixed optimism and concern.
jeroenhd · 33m ago
Plenty of JPGs on the web are already in HDR and you wouldn't notice it if you don't have a HDR capable display. The same is true for PNGs.
qwertox · 3h ago
> Officially supports Exif data
Probably the best news here. While you already can write custom data into a header, having Exif is good.
BTW: Does Exif have a magnetometer (rotation) and acceleration (gravity) field? I often wonder about why Google isn't saving this information in the images which the camera app saves. It could help so much with post-processing, like with leveling the horizon or creating panoramas.
Aardwolf · 2h ago
Exif can also cause confusion for how to render the image: should its rotation be applied or not?
Old decoders and new decoders could now render an image with Exif rotation differently, since it's an optional chunk that can be ignored; and even for new decoders, the spec gives no recommendations for how to apply the Exif rotation.
It does say "It is recommended that unless a decoder has independent knowledge of the validity of the Exif data, the data should be considered to be of historical value only.", so hopefully the rotation will not be used by renderers, but it's only a vague recommendation, there's no strict "don't rotate the image" which would be the only backwards compatible way
With JPEG's Exif there have also been bugs with the rotation being applied twice, e.g. the desktop environment and the underlying library each doing it independently.
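One concrete consequence of the Exif Orientation tag (0x0112): values 5–8 involve a 90° rotation, so the stored width and height swap on display. A minimal sketch of that rule (the function name is my own, not a real API) shows the kind of logic that, applied twice, produces the double-rotation bug:

```python
def display_size(width: int, height: int, orientation: int) -> tuple:
    """Return the on-screen (width, height) after applying the Exif
    Orientation tag (0x0112). Values 1-4 keep the stored dimensions;
    values 5-8 involve a 90-degree rotation, so the dimensions swap.
    Applying this once per layer (library + desktop) is the classic
    double-rotation bug."""
    if not 1 <= orientation <= 8:
        raise ValueError("Exif orientation must be 1..8")
    return (height, width) if orientation >= 5 else (width, height)

# A landscape sensor image shot in portrait (orientation 6 = rotate 90 CW):
print(display_size(4000, 3000, 6))  # → (3000, 4000)
```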
DidYaWipe · 59m ago
The stupid thing is that any device with an orientation sensor is still writing images the wrong way and then setting a flag, expecting every viewing application to rotate the image.
The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.
mavhc · 20m ago
Because your non-smartphone camera doesn't have enough RAM/speed to do that, I assume (at least in burst mode).
If a smartphone camera is doing it, then that's a bad camera app!
Joel_Mckay · 9m ago
Most modern camera modules have built in hardware codecs like mjpeg, region of interest selection, and frame mirror/flip options.
This is particularly important on smartphones and battery-operated devices. However, most smartphones save the photo the same way regardless of orientation and just add a display-rotation flag to the metadata.
It can be super annoying sometimes, as one can't really disable the feature on many devices. =3
andsoitis · 2h ago
There is no standard field to record readouts of a camera's accelerometers or inertial navigation system.
Yes, but websites frequently strip all or almost all Exif data from uploaded images because some fields are used by stalkers to track people down to their real address.
johnisgood · 1h ago
And I strip Exif data, too, intentionally, for similar reasons.
bspammer · 14m ago
That makes sense to me for any image you want to share publicly, but for private images having the location and capture time embedded in the image is incredibly useful.
Dwedit · 53m ago
If you wanted better compression, it's called Lossless WEBP. Lossless WEBP is such a nice codec. Compared with Lossless JXL, it decompresses many times more quickly, and while JXL usually produces a smaller file, it doesn't always.
Lossless AVIF is not competitive.
However, lossless WEBP does not support indexed color images. If you need palettes, you're stuck with PNG for now.
albert_e · 3h ago
So animated GIFs can be replaced by Animated PNGs with alpha blending with transparent backgrounds and lossless compression! Some nostalgia from 2000s websites can be revived and relived :)
Curious if Animated SVGs are also a thing. I remember seeing some Javascript based SVG animations (it was a animated chatbot avatar) - but not sure if there is any standard framework.
This could possibly be used to build full fledged games like pong and breakout :)
mattigames · 2h ago
Overshadowed by CSS animations for almost all use cases.
lawik · 2h ago
But animated gradient outlines on text is the only use-case I care about.
riffraff · 3h ago
I was under the impression many gifs these days are actually served as soundless videos, as those basically compress better.
Can animated PNG beat av1 or whatever?
layer8 · 1h ago
APNG would be for lossless compression, and probably especially for animations without a constant frame rate. Similar to the original GIF format, with APNG you explicitly specify the duration of each individual frame, and you can also explicitly specify looping. This isn’t for video, it’s more for Flash-style animations, animated logos/icons [0], or UI screen recordings.
All valid points, however AV1 also supports lossless compression and is almost certainly going to win the file size competition against APNG every time.
> many gifs these days are actually served as soundless videos
That's not really true. Some websites lie to you by putting .gif in the address bar but then serving a file of a different type. File extensions are merely a convention and an address isn't a file name to begin with so the browser doesn't care about this attempt at end user deception one way or the other.
josephg · 1h ago
I doubt it, given png is a lossless compression format. For video thats almost never what you want.
DidYaWipe · 56m ago
For animations with lots of regions of solid color it could do very well.
armada651 · 1h ago
> Can animated PNG beat av1 or whatever?
Animated PNGs can't beat GIF nevermind video compression algorithms.
Aissen · 1m ago
> Animated PNGs can't beat GIF nevermind video compression algorithms.
Not entirely true, it depends on what's being displayed, see a few simple tests specifically constructed to show how much better APNG can be:
http://littlesvr.ca/apng/gif_apng_webp.html
Of course I don't think it generalizes all that well…
jeroenhd · 22m ago
Once you add more than 256 different colours in total, GIF explodes in terms of file size. It's great for small, compact images with limited colour information, but it can't compete with APNG when the image becomes more detailed than what you'd find on Geocities.
bmacho · 1h ago
> Curious if Animated SVGs are also a thing.
SVG is just html5, it has full support for CSS, javascript with buttons, web workers, arbitrary fetch requests, and so on (obviously not supported by image viewers or allowed by browsers).
chithanh · 1h ago
When it comes to converting small video snippets to animated graphics, I think WEBP was much better than APNG from the beginning. Only if you used GIF as an intermediate format was APNG competitive.
Nowadays, AVIF serves that purpose best I think.
jonhohle · 1h ago
It seems crazy to think about, but I interviewed with a power company in 2003 that was building a web app with animated SVGs.
jokoon · 27m ago
both GIF and PNG use zipping for compressing data, so APNG are not much better than GIF
ggm · 3h ago
Somebody needs to manage approximate human dates/times in a way that other people in software will align to.
"photo scanned in 2025, is about something in easter, before 1940 and after 1920"
luguenth · 1h ago
In EXIF, you have DateTimeDigitized [0]
For ambiguous dates there is the EDTF Spec[1] which would be nice to see more widely adopted.
I remember reading about this in a web forum mainly for dublin core fanatics. Metadata is fascinating.
Different software reacts in different ways to partial specifications of yyyy/mm/dd, such that you can try some cute tricks, but probably only one software package will honour each of them.
And the majors ignore almost all fields other than a core set of one or two, disagree about their semantics, and also do weird stuff with the file name and atime/mtime.
SchemaLoad · 1h ago
The issue that gets me is that Google Photos and Apple Photos will let you manually pick a date, but they won't actually set it in the photo's EXIF. So when you move platforms, all of the images that came from scans or were sent without EXIF lose their dates.
ggm · 1h ago
It's in sidecar files. Takeout gets them, some tools read them.
LegionMammal978 · 3h ago
Reading the linked blog post on the new cICP chunk type [0], it looks like the "proper HDR support" isn't something that you couldn't already do with an embedded ICC profile, but instead a much-abbreviated form of the colorspace information suitable for small image files.
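The brevity is the whole point: a cICP chunk carries just four one-byte H.273 code points (colour primaries, transfer function, matrix coefficients — which must be 0, i.e. RGB, in PNG — and a full-range flag), versus kilobytes for an embedded ICC profile. A sketch of building one (the helper name is my own):

```python
import struct
import zlib

def cicp_chunk(primaries: int, transfer: int, full_range: bool = True) -> bytes:
    """Build a PNG cICP chunk: four one-byte H.273 code points, wrapped
    in the usual length/type/payload/CRC chunk framing. Matrix
    coefficients are fixed to 0 (RGB), as PNG requires."""
    payload = bytes([primaries, transfer, 0, 1 if full_range else 0])
    return (struct.pack(">I", len(payload)) + b"cICP" + payload
            + struct.pack(">I", zlib.crc32(b"cICP" + payload)))

# BT.2100 PQ (HDR10-style): primaries 9 (BT.2020), transfer 16 (PQ).
# Four payload bytes do the work of a multi-kilobyte ICC profile.
chunk = cicp_chunk(9, 16)
print(len(chunk))  # → 16 (4 length + 4 type + 4 payload + 4 CRC)
```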
Official support for animations, yes! This feels so nostalgic to me; I wrote an L-system generator with support for exporting animated PNGs 11 years ago! They worked only in Firefox, and Chrome used to have an extension for them. Too bad I had to take the website down.
Back then, there were no libraries in C# for it, but it's actually quite easy to make APNG from PNGs directly by writing chunks with correct headers, no encoders needed (assuming PNGs are already encoded as input).
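The "writing chunks with correct headers" part really is small. A sketch of the two APNG control chunks per the published APNG layout — acTL (frame count, play count) and fcTL (per-frame geometry and delay) — with offsets and dispose/blend left at 0 for brevity (helper names are my own):

```python
import struct
import zlib

def png_chunk(ctype: bytes, payload: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, payload, CRC-32 of type+payload."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

def actl(num_frames: int, num_plays: int = 0) -> bytes:
    """APNG animation-control chunk; num_plays=0 means loop forever."""
    return png_chunk(b"acTL", struct.pack(">II", num_frames, num_plays))

def fctl(seq: int, w: int, h: int, delay_num: int, delay_den: int = 100) -> bytes:
    """APNG frame-control chunk: sequence number, frame size, x/y offsets
    (0 here), delay as a fraction, dispose and blend ops (0 here)."""
    payload = struct.pack(">IIIIIHHBB", seq, w, h, 0, 0,
                          delay_num, delay_den, 0, 0)
    return png_chunk(b"fcTL", payload)

# A 2-frame looping animation needs one acTL after IHDR, then an fcTL
# before each frame's image data; existing PNG frames supply the pixels.
print(len(actl(2)), len(fctl(0, 1, 1, 10)))  # → 20 38
```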
While I welcome that there is now PNG with animations, I am less impressed about how Mozilla chose to push for it.
Using PNG's magic numbers and pretending to existing software that it is just a normal PNG? That is the same mindset that led to HTML becoming tag soup. After all, HTML with a <blink> tag is still HTML, no?
I think they could have achieved animated PNG standardization much faster with a more humble and careful approach.
snickerbockers · 34m ago
It was gone??? Was I the only one using it this entire time?
hrydgard · 1h ago
What about implementations? libpng seems pretty dead, 1.7 has been in development forever but 1.6 is still considered the stable version. Is there a current "canonical" png C/C++ library?
vanderZwan · 55m ago
I mean, if the spec has been stable for two decades then maybe there just hasn't been much to fix? Especially since PNG is a relatively simple image format.
adgjlsfhk1 · 3h ago
I'm very curious to see how this will end up stacking up vs lossless jpegxl
Simran-B · 2h ago
I doubt it can get anywhere near. What is even the point of a new PNG version if there's something as advanced as JXL that is also royalty-free?
layer8 · 1h ago
Browser support for JPEG XL is poor (basically only Safari I think), while the new PNG spec is already supported by all mainstream browsers.
encom · 1h ago
It's poor, only because Google is using their stranglehold on browsers, to push their own WebP trash. That company can't get broken up soon enough.
layer8 · 40m ago
Firefox also doesn’t support JPEG XL out of the box, and Chrome does support the new PNG, so ¯\_(ツ)_/¯.
LoganDark · 58m ago
For starters, you're actually able to use PNG.
b0a04gl · 1h ago
it's more to do with the obvious economic layer underneath. you give a format new life only if there's tooling and distribution muscle behind it. adobe, apple, chrome, ffmpeg etc may not get aligned at the same time. someone somewhere wants apng/hdr/png to be a standard pipe again for creative chains; maybe because video formats are too bulky for microinteraction or maybe because svg is too unsafe in sandboxed renderers. and think onboarding of animations, embedded previews, rich avatars, system wide thumbs ; all without shipping a separate codec or runtime. every time a 'dead' format comes back, it's usually because someone needed a way around a gate
jbverschoor · 1h ago
What if we kind of fit JXL in PNG? That way it's more likely to be supported
guilbep · 1h ago
Let's call it PPNG: Pas Portable NetWork Graphic
nektro · 2h ago
cautiously optimistic. the thing that makes png so sought after is its status as frozen
defraudbah · 1h ago
This is good news. Are there any packages that support the new PNG standard, or plan to?
Rust/Go/Python/JS?
kumarvvr · 2h ago
Never heard about Animated PNGs, and I am a nerd to the core.
Pleasantly surprised.
neepi · 1h ago
Oh no another HEIC!
Joel_Mckay · 2h ago
DaVinci Resolve also supports OpenEXR format with the added magic of LUT.
PNG is popular with some Commercial Application developers, but the exposure and color problems still look 1980's awful in some use-cases.
Even after spending a few grand on seats for a project, one still gets arrogant 3D clown-ware vendors telling people how they should run their pipeline with PNG hot garbage as input.
People should choose EXR more often, and pick a consistent color standard. PNG does not need yet another awful encoding option. =3
morjom · 1h ago
What are some "consistent color standards" you'd recommend? Honest question.
Joel_Mckay · 1h ago
Like all complex questions, the answer is it depends on the target project and or Display.
The calibration workflows also depend heavily on what is being rendered, source application(s), and the desired content look. There were some common free packs on github for popular programs at one time. Should still be around someplace... good luck. =3
DidYaWipe · 54m ago
"PNG is popular with some Commercial Application developers, but the exposure and color problems still look 1980's awful in some use-cases."
What are you talking about? It's a bitmap. It has nothing to do with "exposure and color problems."
Joel_Mckay · 27m ago
In general, with some applications people hit the limits pretty quickly with PNG and JPG. In our use-case, the EXR format essentially meant a rendered part of the source image wouldn't be "overexposed" by the render pipeline, and layers could be later adjusted to better match in Resolve. Example: your scenes fireball simulation won't look like a fried egg photo from 1980 due to hitting 0xFF.
If you've never encountered the use-case, then don't worry about the aesthetics. Seriously, many vendors also just don't care... especially after they already were paid. Best of luck =3
[1] https://github.com/w3c/png/issues/39#issuecomment-2674690324
https://svgees.us/blog/img/revoy-cICP-bt.2020.png uses the new colour space. If your software and monitor can handle it, you see better colour than I do; otherwise, you see what I see.
This proves OP analogy regarding USB-C. Having PNG as some generic container for lossless bitmap compression means fragmentation in libraries, hardware support, etc. The reason being that if the container starts to support too many formats, implementations will start restricting to only the subsets the implementers care about.
For instance, almost nobody fully implements MPEG-4 Part 3; the standard includes dozens of distinct codecs. Most software only targets a few profiles of AAC (specifically, the LC and HE profiles), and MPEG-1 Layer 3 audio. Next to no software bothers with e.g. ALS, TwinVQ, or anything else in the specification. Even libavcodec, if I recall correctly, does not implement encoders for MPEG-4 Part 3 formats like TwinVQ. GP's fear is exactly this -- that PNG ends up as a standard too large to fully implement and people have to manually check which subsets are implemented (or used at all).
And now think of the younger generation that has grown up with smartphones and have been trained to not even know what a file is. I remember this story about senior high school students failing their school tests during covid because the school software didn't support heif files and they were changing the file extension to jpg to attempt to convert them.
I have no trust the software ecosystem will adapt. For instance the standard libraries of the .net framework are fossilised in the world of multimedia as of 2008-ish. Don't believe heif is even supported to this day. So that's a whole bunch of code which, unless the developers create workarounds, will never support a newer png format.
Same is also true for the most advanced codecs. MPEG-* family and MP3 comes to my mind.
Nothing stops PNG from defining a "set of decoders", and let implementers loose on that spec to develop encoders which generate valid files. Then developers can go to town with their creativity.
Regarding the potential for fragmentation of the png ecosystem the alternative is a new file format which has all the same support issues. Every time you author something you make a choice between legacy support and using new features.
From a developer perspective, adding support for a new compression type is likely to be much easier than implementing logic for an entirely new format. It's also less surface area for bugs. In terms of libraries, support added to a dependency propagates to all consumers with zero additional effort. Meanwhile adding a new library for a new format is linear effort with respect to the number of programs.
Not sure what you're talking about.
If you want to check yours: mediainfo **/*.mp4 | grep -A 2 '^Audio' | grep Format | sort | uniq -c
https://en.wikipedia.org/wiki/TwinVQ#TwinVQ_in_MPEG-4 tells the story of TwinVQ in MPEG-4.
And considering we already have plenty of more advanced competing lossless formats, I really don't see why "feed a BMP to deflate" needs a new, incompatible spin in 2025.
Other than JXL which still has somewhat spotty support in older software? TIFF comes to mind but AFAIK its size tends to be worse than PNG. Edit: Oh right OpenEXR as well. How widespread is support for that in common end user image viewer software though?
If you've created an extensible file format, but you never need to extend it, you've done everything right, I'd say.
That's what I would call really extensible, but then there may be no limits, and hacking/viruses could easily have a field day.
Will sooner or later be used to implement RCEs. Even if you could do a restriction as is done for eBPF, that code still has to execute.
Best would be not to extend it.
Yeah, we know. That's terrible.
In an ideal world, yes. In practice however, if some field doesn't change often, then software will start to assume that it never changes, and break when it does.
TLS has learned this the hard way when they discovered that huge numbers of existing web servers have TLS version intolerance. So now TLS 1.2 is forever enshrined in the ClientHello.
So then it was pointless for PNG to be extensible? Not sure what your argument is.
E.g. your GPU and monitor both have a USB-C port. Plug them together with the right USB cable and you'll get images displayed. Plug them together with the wrong USB cable and you won't.
USB 3 didn't have this issue - every cable worked with every port.
I believe the problem here is that you will have PNG images that “look” like you can open them but can’t.
This is just pretending that if you have a cat and a dog in two bags and you call it “a bag”, it’s one and the same thing…
Labelling is a poor band-aid on the root problem - consumer cables which look identical and fit identically should work wherever they fit.
There should never have been a power-only spec for USB-C socket dimensions.
If a cable supports both power and data, it must fit in all sockets. If a cable supports only power it must not fit into a power and data socket. If a cable supports only data, it should not fit into a power and data socket.
It is possible to have designed the sockets under these constraints, with the caveat that they only go in one way. I feel that that would have been a better trade-off. Making them reversible means that you cannot have a design which enforces cable type.
If PNG gets extended, it's entirely plausible that someone will view a PNG in their browser, save it, and then not be able to open the file they just saved.
There are those who claim "backwards compatibility" doesn't cover "how you use it" - but roughly none of the people who now have to deal with broken software care about such semantic arguments. It used to work, and now it doesn't.
It's a dichotomy. Either the provider accommodates users with older software or not. The file extension or internal headers don't change that reality.
Another example, new versions of PDF can adopt all the bells and whistles in the world but I will still be saving anything intended to be long lived as 1/a which means I don't get to use any of those features.
The USB-C spec never actually breaks backward compatibility.
Do they mention which C libraries use this spec?
What was broken was the promise of a "single cable to rule them all", partly due to manufacturers ignoring the requirements of USB-C (missing resistors or PD chips to negotiate voltages, requiring workarounds with A-to-C adapters), and a myriad of optional stuff, that might be supported or not, without a clear way to indicate it.
That being said, they can also do dumb things. However, right at the end of the sentence you quote, they say:
> we want to make sure we do it right.
So there's hope.
That's just changing an implementation detail of the encoder, and you don't need spec changes for that e.g. there are PNG compressors which support zopfli for extra gains on the DEFLATE (at a non-insignificant cost). This is transparent to the client as the output is still just a DEFLATE stream.
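To illustrate the point, here's a minimal Python sketch (using only the standard library's `zlib`, not zopfli itself): two encoders working at different effort levels emit the same DEFLATE format, and any standard inflate reads both. Zopfli just pushes the size/effort trade-off much further than zlib's level 9.

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 200

# Same DEFLATE format, different encoder effort levels.
fast = zlib.compress(data, level=1)
best = zlib.compress(data, level=9)

# The harder-working encoder usually produces a smaller stream...
assert len(best) <= len(fast)

# ...but any standard inflate decodes both identically, which is why a
# zopfli-optimized PNG remains a plain PNG to every existing reader.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
```

The decoder never learns (or cares) which encoder produced the stream; that's what makes this an implementation detail rather than a spec change.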
> Many of the programs you use already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
It might be too late to rename png to .png4 or something. It sounds like we're using the new png standard already in a lot of our software.
After 20 years of success, we can't resist the temptation to mess with what works.
How can you call this basic fail a success?
For example, 16-bit (integer) TIFF files 'with headroom', i.e. where some bits were used to represent data over 1.0 (HDR), were a common approach for VFX work in the 90's.
16-bit float TIFF has also been a thing for 33 years. Adobe DNG is modeled after TIFF. High-end offline renderers have traditionally used TIFF (with mip-maps) to store textures.
TIFF supports tags so primaries and white point or a known color space name can be stored in the file.
The format is so versatile, it is used everywhere.
And of course it also supports indexed color, i.e. a non-negotiable feature at the time PNG was introduced.
PNG was meant to replace GIF. Instead of looking what was already there some group of "experts" and "enthusiasts" (quote Wikipedia) succumbed to their NIH complexes. If licensing/patent woes over compression algorithms had been a motivator, why not just add a new one to TIFF?
The fact that PNG stores straight/unpremultiplied alpha says everything if you know anything about imaging in computer graphics.
And the fact that the updated format spec just released didn't address this tells you everything you need to know about the group in charge of that, today.
PNG is the VHS of image formats. It should never have seen the light of day in the first place, nor the adoption it did.
Not sure how HDR encoding works, but my impression is that you can set a nominal white point other than (1, 1, 1) in your specified colorspace. This is an extension, but orthogonal to specifying the colorspace itself and the gamut.
The continued popularity of non-HDR 1080p screens on laptops is a bleak reminder that most people would rather save a couple hundred bucks than buy HDR capable hardware.
HDR is great for TVs and a nice-to-have on phones (who mostly get it for free because OLEDs are the norm these days), but display technology only advances as much as its availability in low-cost devices.
It has, but WWW is still de facto sRGB, and will be for a long time still. But again, I'm not strictly opposed to evolving PNG, I just hope they don't ruin it in the process, because that's usually what happens when something gets update for a modern audience. I'll be watching with mixed optimism and concern.
Probably the best news here. While you already can write custom data into a header, having Exif is good.
BTW: Does Exif have a magnetometer (rotation) and acceleration (gravity) field? I often wonder about why Google isn't saving this information in the images which the camera app saves. It could help so much with post-processing, like with leveling the horizon or creating panoramas.
Old decoders and new decoders could now render an image with exif rotation differently, since it's an optional chunk that can be ignored; and even for new decoders, the spec gives no recommendations for how to apply the exif rotation
It does say "It is recommended that unless a decoder has independent knowledge of the validity of the Exif data, the data should be considered to be of historical value only.", so hopefully the rotation will not be used by renderers, but it's only a vague recommendation, there's no strict "don't rotate the image" which would be the only backwards compatible way
With jpeg's exif, there have also been bugs with the rotation being applied twice, e.g. desktop environment and underlying library both doing it independently
The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.
If a smartphone camera is doing it, then bad camera app!
This is particularly important on smartphones and battery operated devices. However, most smartphone devices simply save the photo the same way regardless of orientation, and simply add a display-rotated flag to the metadata.
It can be super annoying sometimes, as one can't really disable the feature on many devices. =3
Exif fields: https://exiv2.org/tags.html
Lossless AVIF is not competitive.
However, lossless WEBP does not support indexed color images. If you need palettes, you're stuck with PNG for now.
Curious if animated SVGs are also a thing. I remember seeing some JavaScript-based SVG animations (it was an animated chatbot avatar), but not sure if there is any standard framework.
Yes. Relevant animation elements:
• <set>
• <animate>
• <animateTransform>
• <animateMotion>
See https://www.w3schools.com/graphics/svg_animation.asp
https://shkspr.mobi/blog/2025/06/an-annoying-svg-animation-b...
This could possibly be used to build full fledged games like pong and breakout :)
Can animated PNG beat av1 or whatever?
[0] like for example these old Windows animations: https://www.randomnoun.com/wp/2013/10/27/windows-shell32-ani...
https://trac.ffmpeg.org/wiki/Encode/AV1#Losslessencoding
That's not really true. Some websites lie to you by putting .gif in the address bar but then serving a file of a different type. File extensions are merely a convention and an address isn't a file name to begin with so the browser doesn't care about this attempt at end user deception one way or the other.
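What the browser actually does is closer to content sniffing: the leading magic bytes of the payload decide the type, not the URL. A minimal, illustrative sketch — the signatures below are the published magic numbers, but `sniff` is a made-up helper, and real browser sniffing (per the MIME Sniffing standard) is considerably more involved:

```python
# Identify an image by its magic bytes, ignoring whatever
# extension the URL or filename claims.
SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"GIF87a", "image/gif"),
    (b"GIF89a", "image/gif"),
    (b"\xff\xd8\xff", "image/jpeg"),
    (b"RIFF", "image/webp"),  # RIFF container; real sniffers also check bytes 8-11 for "WEBP"
]

def sniff(data: bytes) -> str:
    for magic, mime in SIGNATURES:
        if data.startswith(magic):
            return mime
    return "application/octet-stream"

# A "cat.gif" URL serving PNG bytes is still a PNG to the browser.
assert sniff(b"\x89PNG\r\n\x1a\n" + b"\x00" * 16) == "image/png"
```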
Animated PNGs can't beat GIF, never mind video compression algorithms.
Not entirely true, it depends on what's being displayed, see a few simple tests specifically constructed to show how much better APNG can be: http://littlesvr.ca/apng/gif_apng_webp.html
Of course I don't think it generalizes all that well…
SVG is basically HTML5: it has full support for CSS, JavaScript with buttons, web workers, arbitrary fetch requests, and so on (though obviously none of that is supported by image viewers, and browsers don't allow it when the SVG is loaded as an image).
Nowadays, AVIF serves that purpose best I think.
"photo scanned in 2025, is about something in easter, before 1940 and after 1920"
For ambiguous dates there is the EDTF Spec[1] which would be nice to see more widely adopted.
[0] https://www.media.mit.edu/pia/Research/deepview/exif.html
[1] https://www.loc.gov/standards/datetime/
Different software reacts in different ways to partial specifications of yyyy/mm/dd, such that you can try some of the cute tricks but probably only one software package honours it.
And the majors ignore almost all fields other than a core set of one or two, disagree about their semantics, and also do weird stuff with file names and atime/mtime.
[0] https://svgees.us/blog/cICP.html
Back then, there were no libraries in C# for it, but it's actually quite easy to make APNG from PNGs directly by writing chunks with correct headers, no encoders needed (assuming PNGs are already encoded as input).
https://github.com/NightElfik/Malsys/blob/master/src/Malsys....
https://marekfiser.com/projects/malsys-mareks-lsystems/
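The chunk-level trick described above can be sketched in a few lines of Python. This is an illustrative sketch, not the linked project's code: `make_png`, `parse_chunks`, and `make_apng` are made-up names, and it assumes every input PNG has the same dimensions and a single IDAT chunk. It follows the APNG layout: an acTL chunk after IHDR, an fcTL before each frame, and fdAT (a sequence number prepended to IDAT data) for frames after the first.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def parse_chunks(png: bytes):
    """Yield (type, data) pairs, skipping the 8-byte signature."""
    pos = 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        yield png[pos + 4:pos + 8], png[pos + 8:pos + 8 + length]
        pos += 12 + length

def make_png(rgb, w=2, h=2):
    """Tiny solid-color 8-bit RGB PNG (filter type 0 on every row)."""
    raw = b"".join(b"\x00" + bytes(rgb) * w for _ in range(h))
    ihdr = struct.pack(">IIBBBBB", w, h, 8, 2, 0, 0, 0)
    return (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b""))

def make_apng(pngs, delay_ms=100):
    """Stitch same-sized, single-IDAT PNGs into one looping APNG."""
    first = dict(parse_chunks(pngs[0]))
    w, h = struct.unpack(">II", first[b"IHDR"][:8])
    out = b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", first[b"IHDR"])
    out += chunk(b"acTL", struct.pack(">II", len(pngs), 0))  # 0 plays = loop forever
    seq = 0
    for i, png in enumerate(pngs):
        # fcTL: seq, width, height, x, y, delay num/den, dispose_op, blend_op
        out += chunk(b"fcTL", struct.pack(">IIIIIHHBB",
                                          seq, w, h, 0, 0, delay_ms, 1000, 0, 0))
        seq += 1
        idat = dict(parse_chunks(png))[b"IDAT"]
        if i == 0:
            out += chunk(b"IDAT", idat)  # first frame reuses a plain IDAT
        else:
            out += chunk(b"fdAT", struct.pack(">I", seq) + idat)
            seq += 1
    return out + chunk(b"IEND", b"")

apng = make_apng([make_png((255, 0, 0)), make_png((0, 0, 255))])
assert apng.startswith(b"\x89PNG\r\n\x1a\n")
assert b"acTL" in apng and b"fcTL" in apng and b"fdAT" in apng
```

Because acTL, fcTL, and fdAT are ancillary chunks, an old decoder skips them and just renders the first frame — which is exactly the backwards-compatibility story discussed above.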
While I welcome that there is now PNG with animations, I am less impressed about how Mozilla chose to push for it.
Using PNG's magic numbers and pretending to existing software that it is just a normal PNG? That is the same mindset that led to HTML becoming tag soup. After all, HTML with a <blink> tag is still HTML, no?
I think they could have achieved animated PNG standardization much faster with a more humble and careful approach.
A very basic rec.709 workflow tutorial:
https://www.youtube.com/watch?v=lf8COHAgHJs
The Andreas Dürr LUT pack:
https://www.youtube.com/watch?v=dDKK54CeXgM
https://cinematiccookie.gumroad.com/l/bseftb?layout=profile