I can’t imagine the scale that FFmpeg operates at. A small improvement has to add up to thousands and thousands of hours of compute saved. Insanely useful project.
prisenco · 1h ago
Their commitment to performance is a beautiful thing.
Imagine all projects were similarly committed.
therealmarv · 1m ago
like Slack or Jira... lol.
Almondsetat · 5m ago
Yeah no, I'd like non-performance-critical programs to focus on other things than performance, thank you.
byteknight · 1h ago
Seems so easy! You only need the entire world even tangentially related to video to rely solely on your project for a task and you too can have all the developers you need to work on performance!
ackfoobar · 39m ago
I seem to recall them lamenting on Twitter how little contribution (monetary or code) they get, despite how heavily they are used.
hluska · 8m ago
You know friend, if open source actually worked like that I wouldn’t be so allergic to releasing projects. But it doesn’t - a large swath of the economy depends on unpaid labour being treated poorly by people who won’t or can’t contribute.
zahlman · 1h ago
It'd be nice, though, to have a proper API (in the traditional sense, not SaaS) instead of having to figure out these command lines in what's practically its own programming language....
codys · 1h ago
FFmpeg does have an API. It ships a few libraries (libavcodec, libavformat, and others) that expose a C API, which is what the ffmpeg command-line tool itself uses.
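For a rough idea of what that looks like, here's a minimal sketch (error handling trimmed, not production code) that opens a file with libavformat and lists its streams:

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/avutil.h>

    /* Open a media file and list its streams.
     * Build with e.g.: cc probe.c $(pkg-config --cflags --libs libavformat libavcodec libavutil) */
    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
            return 1;                                  /* not a readable media file */
        if (avformat_find_stream_info(fmt, NULL) < 0) {
            avformat_close_input(&fmt);
            return 1;
        }

        for (unsigned i = 0; i < fmt->nb_streams; i++) {
            const AVCodecParameters *par = fmt->streams[i]->codecpar;
            const char *type = av_get_media_type_string(par->codec_type);
            printf("stream %u: %s (%s)\n", i,
                   avcodec_get_name(par->codec_id), type ? type : "unknown");
        }

        avformat_close_input(&fmt);
        return 0;
    }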
Don't know how I overlooked that, thanks. Maybe because the one Python wrapper I know about just generates command lines and makes subprocess calls.
Wowfunhappy · 30m ago
They're relatively low-level APIs. Great if you're a C developer, but from Python just calling the command line probably does make more sense.
javier2 · 31m ago
If you are processing user data, the subprocess approach makes it easier to handle bogus or corrupt data. If something is off, you can just kill the subprocess. If something goes wrong inside the linked C API, it can be harder to handle predictably.
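A rough sketch of that isolation pattern in C (POSIX only; the helper name, file names, flags chosen, and timeout are made up for illustration): run ffmpeg on the untrusted input in a child process and kill it if it runs past a deadline.

    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    /* Run "ffmpeg -i in out" in a child process; kill it if it takes too long. */
    int transcode_with_timeout(const char *in, const char *out, int seconds)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {                          /* child: becomes ffmpeg */
            execlp("ffmpeg", "ffmpeg", "-nostdin", "-y", "-i", in, out, (char *)NULL);
            _exit(127);                          /* exec failed */
        }

        time_t deadline = time(NULL) + seconds;
        for (;;) {
            int status;
            if (waitpid(pid, &status, WNOHANG) == pid)
                return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
            if (time(NULL) >= deadline) {        /* bad input hung the decode: drop it */
                kill(pid, SIGKILL);
                waitpid(pid, &status, 0);
                return -1;
            }
            usleep(100 * 1000);                  /* poll every 100 ms */
        }
    }

If transcode_with_timeout("upload.bin", "out.mp4", 30) returns nonzero, the caller can simply reject the upload; a crashed or hung decoder never takes the calling process down with it.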
xxpor · 16m ago
I get why the CLI is so complicated, but I will say AI has been great at figuring out what I need to run given an English language input. It's been one of the highest value uses of AI for me.
What is the actual process of identifying hotspots caused by suboptimal compiler-generated assembly?
Would it ever make sense to write handwritten compiler intermediate representation like LLVM IR instead of architecture-specific assembly?
molticrystal · 49m ago
It would be interesting to look into this to see if anybody has ever hand-tuned LLVM IR.
My best guess is it would make sense if you were doing codegen for several different instruction sets and the optimization or side-channel prevention were too difficult or specialized to automate, so you'd have to do it by hand.
abhisek · 24m ago
Love it. Thanks for taking the time to write this. Hope it will encourage more folks to contribute.
nisten · 32m ago
I feel like I just got a 3 page intro to autism.
It's glorious.
Alifatisk · 1h ago
How do they make these assembly instructions portable across different CPUs?
CannotCarrot · 1h ago
I think there's a generic C fallback, which can also serve as a baseline. But for the big (targeted) architectures, there's one handwritten assembly version per arch.
faluzure · 41m ago
Yup.
On startup, it runs cpuid and assigns each operation the best function pointer available for that architecture.
In addition to checks like ‘supports AVX’ or ‘supports SSE4’, some operations even have more specific checks like ‘is a fifth-generation Celeron’. The optimization in that case was working around the cache architecture of that particular CPU, IIRC.
Source: I did some dirty things with Chrome's Native Client and FFmpeg 10 years ago.
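A toy illustration of that dispatch pattern (not FFmpeg's actual init code; FFmpeg has its own cpu-flags helpers, the fast paths are handwritten NASM rather than C, and the names here are made up):

    #include <stddef.h>
    #include <stdint.h>

    typedef void (*add_fn)(uint8_t *dst, const uint8_t *src, size_t n);

    /* Portable C fallback: always correct, also the baseline to benchmark against. */
    static void add_c(uint8_t *dst, const uint8_t *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] += src[i];
    }

    #if defined(__x86_64__) && defined(__GNUC__)
    /* Stand-in for a handwritten SIMD kernel; in FFmpeg this symbol would come
     * from a .asm file, not from C with a target attribute. */
    __attribute__((target("avx2")))
    static void add_avx2(uint8_t *dst, const uint8_t *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] += src[i];
    }
    #endif

    add_fn add_pixels = add_c;                   /* default: generic C version */

    /* Called once at startup: query cpuid and pick the fastest implementation. */
    void dsp_init(void)
    {
    #if defined(__x86_64__) && defined(__GNUC__)
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx2"))
            add_pixels = add_avx2;
    #endif
    }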
Yeah, sure. I was specifically referring to the tutorials. FFmpeg needs to run everywhere, although I believe they are more concerned about data center hardware than consumer hardware. So probably also stuff like PowerPC.
ngcc_hk · 1h ago
More interesting than I thought it could be. A domain-specific tutorial is so much better.
sylware · 1h ago
There is serious abuse of the NASM macro preprocessor. It's going to be tough to move to another assembler.
They publish Doxygen-generated documentation for the APIs, available here: https://ffmpeg.org/doxygen/trunk/
https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/x86/x...