Monthly Archives: June 2008

YUV4MPEG2 Origin

Does anyone know where the YUV4MPEG2 format comes from? If you google for it, you will find the afore-linked MultimediaWiki page. Does that imply that multimedia.cx, in conjunction with FFmpeg, controls the ‘standard’, such as it is?

The reason I am wondering is that I have some experiments I wish to perform that involve raw video data that does not necessarily conform to any of the formats already enumerated in Y4M’s wiki description.
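For reference, here is a minimal sketch (in C) of what a Y4M stream looks like on disk, going by the format description on the MultimediaWiki page: a single text header line carrying the dimensions, frame rate, interlacing, aspect ratio, and chroma subsampling tag, followed by a FRAME marker before each raw planar frame. The dimensions, frame rate, C420mpeg2 tag, and output filename below are purely illustrative.

/* Minimal Y4M writer sketch: one mid-gray 4:2:0 frame.
 * All parameters here are illustrative, not prescriptive. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const int w = 320, h = 240;
    static unsigned char y[320 * 240];
    static unsigned char u[160 * 120];
    static unsigned char v[160 * 120];
    FILE *out = fopen("test.y4m", "wb");
    if (!out)
        return 1;

    /* fill the planes with mid-gray */
    memset(y, 128, sizeof(y));
    memset(u, 128, sizeof(u));
    memset(v, 128, sizeof(v));

    /* stream header: size, frame rate, progressive, square pixels, 4:2:0 chroma */
    fprintf(out, "YUV4MPEG2 W%d H%d F25:1 Ip A1:1 C420mpeg2\n", w, h);

    /* each frame is a FRAME marker followed by the raw Y, U, V planes */
    fprintf(out, "FRAME\n");
    fwrite(y, 1, sizeof(y), out);
    fwrite(u, 1, sizeof(u), out);
    fwrite(v, 1, sizeof(v), out);

    fclose(out);
    return 0;
}

The interesting part for my experiments is that C tag at the end of the header, since that is where the enumerated pixel formats live.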

FATE Compiler Updates

It’s compiler upgrade time for FATE. That means upgrading the gcc SVN snapshots for each of the platforms. Also, gcc 4.3.1 was recently released, and using it fixes the FFmpeg regression suite, which was previously broken on x86_64 when compiling with gcc 4.3.0. I am also taking this opportunity to upgrade the Intel C compiler to the latest version (10.1.015). This is still in progress, and I hope it won’t be too painful (unpacking and installing a proprietary compiler packaged inside an RPM on a DEB-based system, then managing licenses afterwards).

Also, all of the configurations now flex a wider array of options: --disable-debug --enable-gpl --enable-postproc --enable-avfilter --enable-avfilter-lavf --enable-shared. --disable-debug really keeps the build size down (handy since I have allotted precious little disk space to these VM build appliances). --enable-shared ensures that FATE is now testing shared library functionality. Further, libswscale is built separately in each command line, though not configured into the main FFmpeg build; I have been advised that doing so would louse up the regression tests.

Anyway, this is all greatly facilitated by the fact that I finally got around to upgrading my private admin script so that I can actually edit build configurations through a web interface. It’s quite arduous to maintain this stuff through the MySQL command line console.

Oh wow, I just noticed that even the gcc-SVN build for x86_64 passes the regression suite. Good stuff.

The Downside Of Contributions

The prolific Jeff Atwood has a blog post entitled Don’t Go Dark which describes the issue of programmers retreating into their chambers for months on end to create the perfect feature; at the conclusion, said programmers drop the feature on the community at large, hoping for its immediate and wholesale incorporation into the project’s mainline codebase. As you can imagine, FFmpeg lends itself well to this style of lone-wolf development. Unfortunately, it also conflicts with FFmpeg’s level of code maturity, which necessitates that every line of code be carefully scrutinized before it is allowed possible immortality in the mainline tree. This leads to a tremendous number of orphaned patches. Should FFmpeg maintain such a strict policy? Personally, I agree with the project leader’s position that if the requested changes are not made before inclusion, they will likely never be made.

There’s another angle that I don’t think was addressed by Jeff’s post. It’s a problem we saw repeatedly on the xine project. Companies that were building things like set-top media boxes were understandably eager to incorporate xine’s superior (and fee-free) media playback architecture. Naturally, it took some… tweaking and customization (read: ad-hoc hacks) to get the stuff to work just right with a specific setup, and within a deadline. When the project was complete, an engineer would drop a mega-patch with all of their changes to the xine codebase, as mandated by the GNU GPL. And it was quite useless to us.

I’m not sure what to do about the latter case. With the former, it at least helps to plan on developing your module in bite-sized phases that can be submitted as separate patches.