Of Filesystems and Codecs

I have been hanging out at the Linux Foundation Collaboration Summit. One theme I have heard tossed around is the matter of filesystems: ongoing filesystem research, the need to upgrade the standard filesystems in Linux, etc. I admit that I don’t spend a lot of time thinking about filesystems (except when I’m writing FUSE drivers for filesystems that lack wide appeal). The filesystem is something that’s just “there” and should just work. Indeed, I have never had a major problem with any filesystem I have used while it was still considered modern. It is only when the next generation comes along that I understand the faults of the previous one (journaled filesystems taught me that an extensive integrity check at every boot isn’t a necessity; anything beyond FAT16 taught me that 8.3 filenames didn’t have to be the standard).

But there is a category of obsessed individuals who spend a lot of time thinking about filesystems and measuring what they’re doing and figuring out how they could be doing things better. And it’s a good thing that we have these people around, even though most of us largely view filesystems as a transparent cog in the machine of daily computing.

This got me to thinking that most computer users likely view multimedia codecs the same way that I view filesystems. An AVI file might contain Cinepak or MPEG-4 part 2 video, or any of 100+ other video codecs; most users have no reason to care about the difference. This may help to explain why some people (not particularly well-versed in multimedia technology) take it for granted that Theora could easily replace H.264 in every application where the latter is in use today.

They’re both video codecs, right?

Performance Smackdown, Now With 64-bit

Another installment in my continuing series of compiler performance reports; that is, reports on the performance of straight C code when compiled by assorted compilers. Following up on round 3, I downloaded the long, free, hi-def H.264/AAC movie that Reimar suggested and profiled FFmpeg decoding it. The whole thing takes 11-15 minutes to decode on my 2.13 GHz Core 2. No matter; my machine is patient, and here are the results:


[Chart: icc vs. gcc performance when running FFmpeg, round 4]

“gcc-svn” is gcc 4.4.0-svn, revision 143046, built on 2009-01-03, same as before.
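
In case anyone wants to reproduce the timing, here’s a minimal sketch of how one might clock a full decode by throwing away the output; the movie file name is a placeholder, and this is not necessarily the exact command I used:

```python
import subprocess
import time

# Placeholder file name; substitute the actual movie under test.
MOVIE = "long_hd_movie.mp4"

def time_decode(ffmpeg_binary):
    """Decode MOVIE with the given ffmpeg build, discarding the frames
    via the null muxer, and return the wall-clock time in seconds."""
    start = time.time()
    subprocess.check_call([ffmpeg_binary, "-i", MOVIE, "-f", "null", "-"])
    return time.time() - start

if __name__ == "__main__":
    print("decode took %.1f seconds" % time_decode("./ffmpeg"))
```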

All validations passed. Further, as Flameeyes suggested, I used “-march=pentium4” on the compilers that support that option but not “-march=core2” (gcc 3.4.6, 4.0.4, 4.1.2, and 4.2.4). I think that improved performance for those compilers, but I won’t know for sure unless I re-run with the original MPEG-4 part 2/MP3 movie from the previous tests.
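
To make the flag selection concrete, here is a hypothetical helper expressing that rule (gcc only learned “-march=core2” in the 4.3 series, so older releases fall back to “-march=pentium4”). This is an illustration, not code from my harness:

```python
# Hypothetical helper: pick the most specific -march flag a given
# gcc release understands. -march=core2 arrived in the gcc 4.3 series.
def march_flag(gcc_version):
    if gcc_version[:2] >= (4, 3):
        return "-march=core2"
    return "-march=pentium4"

for version in [(3, 4, 6), (4, 2, 4), (4, 4, 0)]:
    print("gcc %d.%d.%d -> %s" % (version + (march_flag(version),)))
```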

I also took this opportunity to see how native 64-bit builds performed on the same machine. I hope one day to get Intel’s 64-bit compiler working so it can be included in the competition:


[Chart: profiling 64-bit code using FFmpeg]

For this test, I didn’t specify any compiler optimizations from the command line. Let me know if that should change for the next round. “gcc-svn” is a little more up to date at gcc 4.4.0-svn, revision 144720, built on 2009-03-08.

Lingering TODO: Investigate whether Acovea can help in this process.

FATE Software Ecosystem

Thanks to Vitor for taking my FATE Python script and modifying it to run an exhaustive series of Valgrind tests. Armed with that knowledge, he found and logged a series of issues in the FFmpeg issue tracker, and he shared his method with me (I sketch the general idea at the end of this post). It got me to thinking…

Can I now claim that there is a software ecosystem around FATE?

Anyway, once I finally get the infrastructure in place to run less frequent tasks, you can be sure this will be among the jobs.
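
For the curious, the heart of such a Valgrind pass is easy to sketch. I haven’t studied Vitor’s exact script, so treat the command line and the flow below as assumptions:

```python
import subprocess

def run_under_valgrind(test_command):
    """Run one test command under Valgrind's memcheck tool; the
    --error-exitcode flag makes detected errors show up in the
    process's return code."""
    proc = subprocess.Popen(
        ["valgrind", "--tool=memcheck", "--error-exitcode=1"] + test_command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        universal_newlines=True,
    )
    log = proc.communicate()[0]
    return proc.returncode, log

# Example: check one decode test for memory errors.
code, log = run_under_valgrind(["./ffmpeg", "-i", "test.avi", "-f", "null", "-"])
if code != 0:
    print("valgrind flagged problems:\n%s" % log)
```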

The Visibility Phase

Do me a favor: check out my new experimental prototype of the FATE front page, one in which results are available immediately after they are logged by a build machine (no up-to-15-minute cache delay). There is a lot it doesn’t do yet. And of course, I’m still terrible at web development, so it’s still hideous and awkward. However, it is now possible to freely sort the build/test results by 3 criteria. The default is to sort by failed builds first, then by ascending “tests passed” numbers, and then by architecture. There is no particular reason for that last default, but the first 2 are intended to show immediately where the current problems lie within the FFmpeg codebase. Since the criteria are specified through the URL via a GET request, you can easily bookmark your favorite sort order. In the future, I hope to send out a cookie so that the main page at least remembers your last sort order.
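
To illustrate, that default ordering boils down to a 3-part sort key. The record fields below are made-up names for this example, not my actual schema:

```python
# Illustrative records only; the real FATE data has more fields.
results = [
    {"arch": "x86_64", "build_ok": True,  "tests_passed": 1042},
    {"arch": "x86_32", "build_ok": False, "tests_passed": 0},
    {"arch": "ppc",    "build_ok": True,  "tests_passed": 987},
]

# Failed builds sort first (False < True), then ascending
# "tests passed", then architecture as the final tiebreaker.
results.sort(key=lambda r: (r["build_ok"], r["tests_passed"], r["arch"]))

for r in results:
    status = "ok" if r["build_ok"] else "BUILD FAILED"
    print("%-8s %-12s %d tests passed" % (r["arch"], status, r["tests_passed"]))
```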

Let me know if I’m on the right track with this.

Very basic TODO: When selecting new criteria, make sure the list boxes are preset to those chosen criteria rather than the global defaults. (I told you I’m bad at this.)