Yearly Archives: 2008

YUV4MPEG2 Origin

Does anyone know where the YUV4MPEG2 format comes from? If you google for it, you will find the afore-linked MultimediaWiki page. Does that imply that multimedia.cx, in conjunction with FFmpeg, controls the ‘standard’, such as it is?

The reason I am wondering is that I have some experiments I wish to perform that involve raw video data that does not necessarily conform to any of the formats already enumerated in Y4M’s wiki description.
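One nice thing about Y4M for this kind of experimentation is that the container is simple enough to emit by hand. Here is a minimal sketch in Python of a writer for planar 4:2:0 data, following the header layout described on the MultimediaWiki page; the particular tag values (frame rate, interlacing, aspect, C420) are just illustrative defaults, not a statement about what my experiments will use.

```python
def write_y4m(path, frames, width, height, fps=(30, 1)):
    """Write raw planar 4:2:0 frames as a YUV4MPEG2 stream.

    Each frame is a bytes object: the full Y plane (width*height
    bytes) followed by the U and V planes (width/2 * height/2
    bytes each). The stream header is a single ASCII line of
    space-separated tags; each frame is preceded by a FRAME line.
    """
    with open(path, "wb") as f:
        header = "YUV4MPEG2 W%d H%d F%d:%d Ip A1:1 C420\n" % (
            width, height, fps[0], fps[1])
        f.write(header.encode("ascii"))
        for frame in frames:
            f.write(b"FRAME\n")  # optional per-frame parameters omitted
            f.write(frame)
```

The open question above still stands, though: if I want to stuff a pixel format in there that the wiki page does not enumerate, who decides what the C tag should say?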

FATE Compiler Updates

It’s compiler upgrade time for FATE. That means upgrading the gcc SVN snapshots for each of the platforms. Also, gcc 4.3.1 was recently released, and using it fixes the FFmpeg regression suite that was previously broken on x86_64 when compiling with gcc 4.3.0. I am also taking this opportunity to upgrade the Intel C compiler to the latest version (10.1.015). That upgrade is still in progress, and I hope it won’t be too painful (unpacking and installing a proprietary system packaged inside an RPM on a DEB-based system, then managing licenses afterwards).

Also, all of the configurations now flex a wider array of options: --disable-debug --enable-gpl --enable-postproc --enable-avfilter --enable-avfilter-lavf --enable-shared. --disable-debug really keeps the build size down (handy since I have allotted precious little disk space to these VM build appliances). --enable-shared ensures that FATE is now testing shared library functionality. Further, libswscale is built separately in each command line, though not configured into the main FFmpeg build; I have been advised that doing so would louse up the regression tests.
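For reference, the configure invocation now looks roughly like this. This is a sketch assembled from the options listed above; the source directory layout and any prefix paths are incidental assumptions, not the exact command the build appliances run:

```shell
# Hypothetical FATE build-appliance configure line (paths are illustrative)
cd ffmpeg
./configure --disable-debug --enable-gpl --enable-postproc \
    --enable-avfilter --enable-avfilter-lavf --enable-shared
```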

Anyway, this is all greatly facilitated by the fact that I finally got around to upgrading my private admin script so that I can actually edit build configurations through a web interface. It’s quite arduous to maintain this stuff through the MySQL command line console.

Oh wow– I just noticed that even the gcc-SVN build for x86_64 passes the regression suite. Good stuff.

The Downside Of Contributions

The prolific Jeff Atwood has a blog post entitled Don’t Go Dark which describes the issue of programmers retreating into their chambers for months on end to create the perfect feature; at the conclusion, said programmers drop the feature on the community at large, hoping for its immediate and wholesale incorporation into the project’s mainline codebase. As you can imagine, FFmpeg lends itself well to this style of lone-wolf development. Unfortunately, it also conflicts with FFmpeg’s level of code maturity, which necessitates that every line of code be carefully scrutinized before it is allowed possible immortality in the mainline tree. This leads to a tremendous number of orphaned patches. Should FFmpeg maintain such a strict policy? Personally, I agree with the project leader in his position that, if the requested changes are not made before inclusion, they will likely never be made.

There’s another angle that I don’t think was addressed by Jeff’s post. It’s a problem we saw repeatedly on the xine project. Companies that were doing things like set-top media boxes were understandably eager to incorporate xine’s superior (and fee-free) media playback architecture. Naturally, it took some… tweaking and customizations (read: ad-hoc hacks) to get the stuff to work just right with a specific setup, and within a deadline. When the project was complete, an engineer would drop a mega-patch with all of their changes to the xine codebase, as mandated by the GNU GPL. And it was quite useless to us.

I’m not sure what to do about the latter case. With the former, it helps to plan from the start on developing your module in bite-sized phases that can be submitted as separate patches.

English Phonetic CAPTCHA

Jeff Atwood recently wrote about automated spamming in Designing For Evil. The ensuing discussion presented plenty of technical anti-spam brainstorms as well as the usual violent anti-spammer fantasies. However, one interesting insight I gained from the comment thread concerned the automated nature of Wikipedia’s anti-spam measures:

There is an IRC channel that receives every edit done to Wikipedia; a bot then checks the page for known bad URLs and strings and reverts if necessary.

Aha! So it isn’t just a global network of diligent and vigilant volunteer Wikipedians keeping the content clean. That model always struck me as largely intractable, and learning this punctures the starry-eyed ethos behind the wiki concept a bit. I did a little research, and such a bot does seem to be a real thing.

I suppose something like that would be vast overkill for the MultimediaWiki. As the discussion also details, not all public discussion forums are created equal in terms of attractiveness to spammers, and the MultimediaWiki would probably be pretty far down the list. Some kind of registration CAPTCHA would probably be adequate. And now that I understand a little more about PHP programming thanks to FATE, I may have enough knowledge to try my hand at such a system.

Hey, here’s a CAPTCHA idea that I have entertained: Call it a phonetic CAPTCHA and challenge the user to type in the proper English word with a certain phonetic pronunciation; for example: KAH MEW NIK AY SHUNZ (communications). I was inspired by Infocom’s old Planetfall interactive fiction game where things were labeled phonetically. Perhaps it discriminates against non-native English speakers (and the less educated among the native set) as well as the spambots, but I guess every measure has its pros and cons.
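The mechanics would be simple enough. Here is a minimal sketch in Python (easy enough to port to PHP) assuming a small hand-curated table of phonetic respellings; the entries below are my own invented examples, not taken from any real pronunciation dictionary:

```python
import random

# Hand-made table of phonetic respellings -> answers; a real system
# would want a much larger list, drawn from a pronunciation dictionary.
PHONETIC_WORDS = {
    "KAH MEW NIK AY SHUNZ": "communications",
    "TEK NAH LUH JEE": "technology",
    "MUHL TEE MEE DEE UH": "multimedia",
}

def new_challenge(rng=random):
    """Pick a random phonetic spelling to show to the user."""
    return rng.choice(list(PHONETIC_WORDS))

def check_answer(challenge, response):
    """Compare the user's guess to the expected English word,
    ignoring case and surrounding whitespace."""
    return PHONETIC_WORDS[challenge].lower() == response.strip().lower()
```

The registration form would display the challenge string, stash it in the session, and run the submitted answer through the check before creating the account.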