FFmpeg is quite an amazing program. There’s a certain smugness that comes with being involved with it. That can lead to a bit of complacency, followed by shock when you realize you’re not as good as you thought you were.
That happened to me recently when I realized that the official libtheora decoder is significantly faster than FFmpeg’s Theora decoder. I suddenly wondered whether the same was true in other departments, i.e., whether FFmpeg is slower than other open source libraries dedicated to a single purpose. Why do I care? Because I started to wonder if FFmpeg would simply come to be known as the gcc of multimedia processing.
Is it good or bad to be compared to gcc in this way? It depends; gcc has its pros and cons. A colleague once succinctly summarized the trade-off thus: “You can generally count on gcc to generate adequate code for just about any platform.” Some free software fans grow indignant when repeated benchmarks unequivocally show, for example, Intel’s proprietary compilers slaughtering the various versions of gcc. But what do you expect? gcc spreads its effort across all kinds of chips, while Intel focuses largely on generating code for chips that it invented and knows better than anyone else. Frankly, I’ve always admired gcc for doing as well as it does.
But does it have to be that way with FFmpeg? “You can generally count on FFmpeg to decode a particular format fairly quickly and to encode to a wide variety of formats with reasonable quality.” That’s certainly true of Theora right now (FFmpeg can decode the format, just not as efficiently as libtheora). What about other notable formats? I think some tests are in order.
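Before getting to real numbers, here is a minimal sketch of the kind of wall-clock comparison I have in mind. It assumes ffmpeg is on the PATH, that dump_video is the example decoder built from a libtheora source checkout (the exact invocation may differ in your build), and that input.ogv stands in for whatever test clip actually gets used.

```python
#!/usr/bin/env python3
"""Rough decode-speed comparison: FFmpeg vs. libtheora's dump_video.

Assumptions (adjust for your setup):
  - 'ffmpeg' is on the PATH
  - './dump_video' is the example decoder from the libtheora source tree
  - 'input.ogv' is a hypothetical Theora test clip
"""
import subprocess
import time

CLIP = "input.ogv"  # hypothetical test file; substitute your own
RUNS = 5            # average over several runs to smooth out noise

COMMANDS = {
    # Decode with FFmpeg, discarding the decoded frames via the null muxer
    "ffmpeg":    ["ffmpeg", "-threads", "1", "-i", CLIP, "-f", "null", "-"],
    # Decode with libtheora's example decoder, discarding its output
    "libtheora": ["./dump_video", CLIP],
}

for name, cmd in COMMANDS.items():
    total = 0.0
    for _ in range(RUNS):
        start = time.perf_counter()
        subprocess.run(cmd, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL, check=True)
        total += time.perf_counter() - start
    print(f"{name}: {total / RUNS:.3f} s average over {RUNS} runs")
```

Pinning FFmpeg to a single thread keeps the comparison apples-to-apples with a single-threaded reference decoder, and discarding all output keeps disk I/O from polluting the timings.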