IETF Request For Codec

The IETF has recently put out a request for an audio codec. It may strike some of you as remarkable that anyone would need another audio codec since, at the time of this writing, we have cataloged 137 audio codecs in the MultimediaWiki. The request deserves some attention, though: it acknowledges that there are already lots and lots of audio codecs in existence and explains why each category is unsuitable for its goals. I’m not going to be the one to audit every one of those 137 codecs and identify why each is unsuitable for the outlined goals.

I am a bit concerned about some of their stated goals, such as the very first one: “Designing for use in interactive applications (examples include, but are not limited to, point-to-point voice calls, multi-party voice conferencing, telepresence, teleoperation, in-game voice chat, and live music performance).” Generally, one of those examples is not like the others (unless, perhaps, “live music performance” refers to a cappella singing). Then again, the request later states that optimizing for very low bitrates (2.4 kbps and lower) is out of scope.

FFmpeg Introspection

I accidentally fed the main ‘ffmpeg’ binary to itself as input. Its best guess is that the file is an MP3 container with MPEG-1, layer 1 audio data:

[NULL @ 0x1002600]Format detected only with low score of 25, misdetection possible!
[mp1 @ 0x1003600]Header missing
    Last message repeated 35 times
[mp3 @ 0x1002600]max_analyze_duration reached
[mp3 @ 0x1002600]Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from 'ffmpeg_g':
  Duration: 00:03:20.29, start: 0.000000, bitrate: 256 kb/s
    Stream #0.0: Audio: mp1, 32000 Hz, 2 channels, s16, 256 kb/s
At least one output file must be specified

What an Easter egg it would be if the compiled binary could actually decode to something — anything — valid.

HTMLOL5 Video

Last week brought us a lot of news in the web browser space: Mozilla released Firefox 3.6 (nice fullscreen video, BTW, especially on Linux); YouTube and Vimeo grabbed headlines by announcing HTML5 video support for their video sites.

I resolved a few months ago to not bother reading so many tech news sites since they consist of 99% misinformed drivel, and I’m a happier person for that decision. But when there’s big news that can be seen as tangentially related to what I do at my day job, it gets hard to resist.

From everything I read, there was surprisingly little Flash hatred in the wake of these announcements. Really, the situation just erupted into an all-out war between the devotees of Firefox (and to a lesser extent, Opera) and supporters of Google (and to a lesser extent Apple and their Safari browser). It gets boring and repetitive in a hurry when you start reading these discussions since they all go something like this:


[Infographic: HTML5 Video Tag Arguments]

As you can see from the infographic, at least both sides can agree on something. I would also like to state my emphatic support for Mozilla’s principled, hardline stance against the MPEG stack for HTML5 video. Please don’t budge on your position. Stand firm on the moral high ground.

That graphic is just the beginning; there are so many problems with HTML5 video that it’s hard to know where to even begin. That’s why I need to remember to just laugh gently at its mention and move along. I only get a headache trying to understand how HTML5 video could ever have the slightest chance of mattering in the grand scheme of things.

However, a pleasant side effect of this attention is that more and more people are actually being exposed to the video tag. One nagging detail people invariably notice is that the video tag performs exceptionally poorly, likely because browsers have to deal with the exact same limitations that the Flash Player does, namely, converting decoded YUV data to RGB so that it can be plopped on a browser page. And if you try to claim that you can just download the media and use a standalone player, you continue to miss the entire point of web video.
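
To put some numbers behind that claim: the colorspace conversion is simple arithmetic, but it has to happen for every pixel of every decoded frame. Here’s a minimal sketch of the per-pixel math, assuming BT.601 video-range coefficients (real implementations use fixed-point math and SIMD, but the workload has the same shape):

def yuv_to_rgb(y, u, v):
    # BT.601 video-range YUV -> 8-bit RGB, one pixel at a time
    c, d, e = y - 16, u - 128, v - 128
    clamp = lambda x: max(0, min(255, int(x + 0.5)))
    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    return r, g, b

# at 1080p and 30 frames/second, this runs roughly 62 million times per
# second (1920 * 1080 * 30), which is why a naive software path hurts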

Another aspect I have to appreciate about the debate surrounding HTML5 video is the way that it brings out the positive spirit in people. Online discussions are normally overwhelmingly negative. But advocates of the HTML5/Xiph approach truly believe this could all work out: If Apple decides to adopt the Xiph stack, and if some benevolent hardware company would churn out custom ASICs for decoding Xiph codecs, and if those ASICs were adopted in next quarter’s array of mobile computing devices and netbooks, and if Google transcodes their zillobytes of YouTube videos to the Xiph stack, and if Google throws the switch and forces the 60% of IE-using stragglers to either change browsers or go without YouTube, and if Google thereby forgoes many opportunities to monetize their videos, then absolutely! HTML5 video could totally unseat Flash video.

Okay, that’s it for me. I’m going to go back to ignoring the insular, elitist tech world at large except for the few domains in which I have some influence.

Systematic Benchmarking Adjunct to FATE

Pursuant to my rant on the futility of comparing the performance of various compilers’ output, I wholly acknowledge the utility of systematically benchmarking FFmpeg. FATE is not an appropriate mechanism for doing so, at least not in its normal mode of operation, which would have each and every one of the 60 or so configurations running extended test specs during every cycle. Quite a waste.

Hypothesis: By tracking the performance of a single x86_64 configuration, we should be able to catch performance regressions in FFmpeg.

Proposed methodology: Create a new script that watches for SVN commits. For each and every commit (no skipping), check out the code, build it, and run a series of longer tests. Log the results and move on to the next revision.
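
To make that concrete, here’s a rough sketch of such a watcher in Python. The repository URL, sample file, and polling interval are placeholders, and a real script would need error handling, persistent logging, and configure flags for the chosen compiler:

import subprocess
import time

REPO = "svn://svn.ffmpeg.org/ffmpeg/trunk"  # placeholder URL

def head_revision():
    # 'svn info' reports the repository head in a "Revision: NNNN" line
    out = subprocess.check_output(["svn", "info", REPO]).decode()
    for line in out.splitlines():
        if line.startswith("Revision:"):
            return int(line.split()[1])

def benchmark_revision(rev):
    subprocess.check_call(["svn", "checkout", "-q", "-r", str(rev), REPO, "bench"])
    subprocess.check_call(["./configure"], cwd="bench")
    subprocess.check_call(["make"], cwd="bench")
    start = time.time()
    # decode only; the null muxer discards the output
    subprocess.check_call(["./ffmpeg", "-i", "../bbb_1080p.mp4", "-f", "null", "-"],
                          cwd="bench")
    return time.time() - start

last = head_revision()
while True:
    head = head_revision()
    for rev in range(last + 1, head + 1):      # every commit, no skipping
        print(rev, benchmark_revision(rev))    # log seconds per revision
    last = head
    time.sleep(60)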

What compiler to use? I’m thinking about using gcc 4.2.4 for this. In my (now abandoned) controlled benchmarks, it was the worst performer by a notable margin. I’m thinking that the low performance might help accentuate performance regressions. Is this a plausible theory? Two years of testing via FATE haven’t revealed any problems with this version beyond its poor performance.

What kind of samples to test? Thankfully, Big Buck Bunny is available in 4 common formats:

  • MP4/MPEG-4 part 2 video/AC3 audio
  • MP4/H.264 video/AAC audio
  • Ogg/Theora video/Vorbis audio
  • AVI/MS MPEG-4 video/MP3 audio

I have the 1080p versions of all those files, though I’m not sure if it’s necessary to decode all 10 minutes of each. It depends on what kind of hardware I select to run this on.
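
As a rough starting point, a loop like the following could time a pure decode of each sample by sending the output to the null muxer (the file names are placeholders for the four 1080p variants):

import subprocess
import time

samples = [
    "bbb_1080p_mpeg4_ac3.mp4",
    "bbb_1080p_h264_aac.mp4",
    "bbb_1080p_theora_vorbis.ogg",
    "bbb_1080p_msmpeg4_mp3.avi",
]

for sample in samples:
    start = time.time()
    # decode everything, write nothing
    subprocess.check_call(["ffmpeg", "-i", sample, "-f", "null", "-"])
    print("%s: %.2f seconds" % (sample, time.time() - start))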

Further, I may wish to rip an entire audio CD as a single track, encode it with MP3, Vorbis, AAC, WMA, FLAC, and ALAC, and decode each of those.
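
Generating that audio corpus could look something like the sketch below; the encoder names are whatever the local build provides (external-library encoders such as libmp3lame and libvorbis must be enabled at configure time, and some encoders may need extra flags):

import subprocess

# (encoder, extension) pairs; availability depends on the build
encoders = [
    ("libmp3lame", "mp3"),
    ("libvorbis", "ogg"),
    ("aac", "m4a"),
    ("wmav2", "wma"),
    ("flac", "flac"),
    ("alac", "m4a"),
]

for codec, ext in encoders:
    out = "cd_track.%s.%s" % (codec, ext)
    subprocess.check_call(["ffmpeg", "-i", "cd_track.wav", "-acodec", codec, out])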

What other common formats would be useful to track? Note that I only wish to benchmark decoding. My reasoning for this is that decoding should, on the whole, only ever get faster, never slower. Encoding might justifiably get slower as algorithmic trade-offs are made.

I’m torn on the matter of whether to validate the decoded output during the benchmarking test. The case against validation: computing framecrcs will impact the overall benchmark timing, and validation is redundant since that’s FATE’s main job. The case for validation: since this will always run on the same configuration, there’s no need to worry about off-by-1 rounding issues; further, if validation fails, that data point can be scrapped (as will also happen when a build fails) and won’t count towards the overall trend, since an errant build could throw off the performance data. Back on the ‘against’ side, that’s exactly what statistical methods like weighted moving averages are supposed to help smooth out.
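
For what it’s worth, that kind of smoothing is cheap to implement. Here’s a sketch of an exponentially weighted moving average over per-revision timings, where a single errant data point nudges the trend rather than spiking it:

def smooth(timings, alpha=0.2):
    # the newest sample gets weight alpha; history decays geometrically
    avg = None
    out = []
    for t in timings:
        avg = t if avg is None else alpha * t + (1 - alpha) * avg
        out.append(avg)
    return out

# one outlier (9.1) only bumps the trend to ~5.8 instead of spiking to 9.1
print(smooth([5.0, 5.1, 5.0, 9.1, 5.2, 5.1]))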

I’m hoping that graphing this data for all to see will be made trivial thanks to Google’s Visualization API.

The script would run continuously, waiting for new SVN commits. When it’s not busy with new code, it would work backwards through FFmpeg’s history to backfill performance data.

So, does this whole idea hold water?

If I really want to run this on every single commit, I’m going to have to do a little analysis to determine a reasonable average number of FFmpeg SVN commits per day over the past year and perhaps what the rate of change is (I’m almost certain the rate of commits has been increasing). If anyone would like to take on that task, that would be a useful exercise (‘svn log’, some text manipulation tools, and a spreadsheet should do the trick; you could even put it in a Google Spreadsheet and post a comment with a link to the published document).
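
For the record, the commit-counting part is only a few lines of scripting. Here’s a sketch that tallies commits per day from ‘svn log’ output, relying on the standard header format (r12345 | author | YYYY-MM-DD hh:mm:ss … | N lines); the URL is a placeholder:

import subprocess
from collections import defaultdict

log = subprocess.check_output(
    ["svn", "log", "svn://svn.ffmpeg.org/ffmpeg/trunk"]).decode()

commits_per_day = defaultdict(int)
for line in log.splitlines():
    parts = [p.strip() for p in line.split("|")]
    # revision header lines have exactly 4 pipe-separated fields
    if len(parts) == 4 and parts[0].startswith("r") and parts[0][1:].isdigit():
        day = parts[2].split()[0]   # keep just the YYYY-MM-DD portion
        commits_per_day[day] += 1

for day in sorted(commits_per_day):
    print(day, commits_per_day[day])  # ready to paste into a spreadsheet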