Category Archives: FATE Server

Pictures of FATE Machines

Apologies for not properly crediting ideas here: Someone once suggested that it would be useful to have a MultimediaWiki page that collected information about all of the various FATE machines, rather than keeping it in the central FATE database to be displayed in a very inept fashion through the main website. Further, someone (possibly the same person) thought it would be neat to have pictures of all the machines performing FATE duty. I have done both on the new FATE Machines page. Eventually, FATE will simply link over to that wiki page rather than its own internal page.

[Photo of the FATE build machine described below]

The machine shown above is pretty much the hardest working computer on the FATE farm. It sits on the floor of my living room and constantly churns away, rebuilding and testing FFmpeg for 22 different compiler configurations.

Other FATE machine administrators are welcome to edit their machine’s descriptions and upload pictures (provided they have physical access to their machines; I’m really not sure about the arrangements that some of you have).

350 Tests

Another milestone of sorts: an even 350 active FATE tests. Thanks to Vitor for figuring out what was wrong with my ea-tgq test. It seems I was being overzealous in applying the ‘-idct simple’ option. While that is normally standard testing procedure for DCT-based codecs, the simple IDCT made this particular test overflow, according to Vitor.

I’m really starting to run out of FATE tests to add, which means I will soon have to stop putting off the fundamental upgrades that would allow me to test the remaining stuff (mostly encoders, muxers, and bit-inexact audio).

I learned something else related to FATE: Don’t mount a suite of FATE samples over wireless if such an arrangement can be avoided. I was able to save around 4 minutes per test cycle on my Mac Mini by rsyncing the 300 MB of FATE test samples locally rather than mounting the share over wireless-G. As a result, the Mac Mini, which only has to worry about 2 configurations, tends to be the most frequent builder.

Pulling from my own rsync repository also has the benefit of allowing me to properly verify that samples are staged before I activate their tests; activating tests before the samples are in place has bitten us repeatedly.
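For the curious, the arrangement boils down to something like the following. This is just a minimal sketch; the rsync URL and the local path are placeholders rather than the real locations.

```python
# Pull the sample suite down with rsync before each build/test cycle
# instead of mounting it over the network. URL and path are hypothetical.
import subprocess

SAMPLES_URL = "rsync://fate.example.org/fate-suite/"  # hypothetical source
LOCAL_SUITE = "/var/fate/fate-suite/"                 # hypothetical destination

def sync_samples():
    # -a preserves times/permissions; --delete keeps the local copy an
    # exact mirror of whatever is currently staged upstream
    subprocess.check_call(["rsync", "-a", "--delete", SAMPLES_URL, LOCAL_SUITE])

if __name__ == "__main__":
    sync_samples()
```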

Bink Video in FFmpeg

Today was the day: Kostya committed his Bink video decoder to FFmpeg. Here’s just one little screenshot:


Screenshot of the attract mode Bink video from Indiana Jones and the Emperor's Tomb

Of course, this is just one Bink file out of the literal thousands of software titles that have incorporated Bink video (the above comes from Indiana Jones and the Emperor’s Tomb for Windows). For this reason, it’s entirely possible that the Bink video decoder (not to mention the Bink audio decoder and the Bink file format demuxer) might not cover all the cases out there. This is especially relevant considering intel I have received from a guy who has talked to the guy who invented Bink and described the development process. The upshot is that there could conceivably be a lot of custom Bink versions out there. That’s why Kostya hopes for a lot of testing with as many different Bink files as people can throw at this system. To that end, I started with my old Multimedia Exploration Journal and did a text search for every game that I recorded as using Bink.

Just think: The next time that YouTube and assorted other video uploading services update their video conversion backends, they can finally be flooded with Bink videos. (I know it seems silly, but I sometimes feel like my biggest contribution to open source multimedia has been enabling people to upload video files they found on their old Sega Saturn CD-ROMs to YouTube).

As for FATE, is it plausible to get a basic decoding test staged at this point? I ran a simple sample through my RPC testing tool and learned that the video output is bit exact across platforms. Test staged.
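For anyone wondering what that kind of check looks like in practice, here is a minimal sketch of the idea: decode the same sample on several builds with the framecrc muxer and compare the outputs. The binary names and the sample file name are hypothetical, not the actual FATE configuration.

```python
# Decode the same sample on each build with "-f framecrc" and verify that
# every build emits identical per-frame CRCs. Paths and the sample name
# are made up for illustration.
import subprocess

def framecrc(ffmpeg_binary, sample):
    result = subprocess.run(
        [ffmpeg_binary, "-i", sample, "-f", "framecrc", "-"],
        capture_output=True, text=True)
    return result.stdout

builds = ["./ffmpeg-x86_32", "./ffmpeg-x86_64", "./ffmpeg-ppc"]  # hypothetical
outputs = [framecrc(b, "emperors-tomb-attract.bik") for b in builds]
print("bit exact" if len(set(outputs)) == 1 else "outputs differ")
```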

(Aside: Thanks to Vitor Sessak, Valgrinder extraordinaire, for locating a memory bug in the Musepack v7 demuxer. Since I created and staged a v7 sample at the same time I staged a sample for the Musepack v8 demuxer, I have already activated a Musepack v7 demuxing test.)

Here’s a project for someone who likes text processing and searching puzzles: Find a simple, efficient method for comparing my list of DOS/Windows games (here’s the HTML list and here it is in CSV) against the big list of known Bink titles and find all the Bink games in my PC game collection. I have already harvested samples from: Alien vs. Predator Gold Edition, Disney’s Atlantis, Gabriel Knight 3, Gods & Generals, Halo 3 (Xbox 360), In Cold Blood, Indiana Jones and the Emperor’s Tomb, Monsters Inc. Wreck Room Arcade, Starlancer, Tony Hawk Pro Skater 2, Uru: Ages Beyond Myst.
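One possible approach, assuming both lists can be flattened to plain text with one title per line (the file names below are hypothetical stand-ins for the two lists), is to normalize the titles and intersect them as sets; something fuzzier, such as difflib, could also catch titles that differ only in punctuation or subtitles.

```python
# Normalize titles from both lists and print the intersection.
import csv
import re

def normalize(title):
    # lowercase and strip punctuation/extra whitespace for comparison
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

with open("my-game-collection.csv", newline="") as f:
    # assumes the game title is in the first CSV column
    my_games = {normalize(row[0]) for row in csv.reader(f) if row}

with open("known-bink-titles.txt") as f:
    bink_titles = {normalize(line) for line in f if line.strip()}

for title in sorted(my_games & bink_titles):
    print(title)
```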

More on Adjunct Profiling

People offered a lot of constructive advice about my recent systematic profiling idea. As in many engineering situations, there’s a strong desire to get things right from the start, while at the same time some hard decisions need to be made or the idea will never get off the ground.

Code Coverage
A hot topic in the comments of the last post dealt with my selection of samples for the profiling project. It seems that the Big Buck Bunny encodes use a very sparse selection of features, at least when it comes to the H.264 files. The consensus seems to be that, to do this profiling project “right”, I should select samples that exercise as many decoder features as possible.

I’m not entirely sure I agree with this position. Code coverage is certainly an important part of testing that should receive even more consideration as FATE expands its general test coverage. But for the sake of argument, how would I go about encoding samples for maximum H.264 code coverage, or at least samples that exercise a wider set of features than the much-derided Apple encoder is known to support?

At least this experiment has introduced me to the concept of code coverage tools. Right now I’m trying to figure out how to make the GNU code coverage (gcov) tool work. It’s a bumpy ride.
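In case it helps anyone else riding along, here is my rough understanding of the workflow, sketched under my own assumptions: the configure flags are gcc’s standard coverage options, while the source path and sample name are just examples.

```python
# Rough outline of a gcov run: build with gcc's coverage instrumentation,
# exercise the code by decoding a sample, then ask gcov for a per-file
# summary. Typical sequence (commands shown as comments):
#
#   ./configure --extra-cflags="-fprofile-arcs -ftest-coverage" \
#               --extra-ldflags="-fprofile-arcs"
#   make
#   ./ffmpeg -i h264-sample.mp4 -f null -    # produces .gcda counter files
import re
import subprocess

def line_coverage(source_file, object_dir):
    # gcov's summary line looks like "Lines executed:87.50% of 1234"
    out = subprocess.run(["gcov", "-n", "-o", object_dir, source_file],
                         capture_output=True, text=True).stdout
    match = re.search(r"Lines executed:\s*([\d.]+)% of (\d+)", out)
    return (float(match.group(1)), int(match.group(2))) if match else None

if __name__ == "__main__":
    print(line_coverage("libavcodec/h264.c", "libavcodec"))
```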

Memory Usage
I think this project would also be a good opportunity to profile memory usage in addition to CPU usage. Obvious question: How to do that? I see that on Linux, /proc/<pid>/status contains a field called VmPeak which is supposed to report the maximum amount of memory the process has allocated. This would be useful if I could keep the process from dying after it completes, so that the parent process can read its status file one last time. Otherwise, I suppose the parent script can periodically poll the file and track the largest value seen. Since this involves testing long-running processes, and since, ideally, most of the necessary memory is allocated up front, this approach might work. However, if my early FATE memories are correct, the child process is likely to hang around as a zombie until the parent performs its final status poll(). Thus, check the status file one more time before that poll.
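To make the polling idea concrete, here is a minimal sketch of the approach; the ffmpeg command line at the bottom is only a placeholder.

```python
# Launch the command, periodically read VmPeak from /proc/<pid>/status
# while the child is still running, and remember the largest value seen.
import subprocess
import time

def run_and_measure(cmd, interval=0.5):
    proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    status_path = "/proc/%d/status" % proc.pid
    peak_kb = 0
    while proc.poll() is None:        # child has not exited/been reaped yet
        try:
            with open(status_path) as f:
                for line in f:
                    if line.startswith("VmPeak:"):
                        peak_kb = max(peak_kb, int(line.split()[1]))  # in kB
                        break
        except IOError:
            break                      # /proc entry disappeared
        time.sleep(interval)
    return proc.returncode, peak_kb

if __name__ == "__main__":
    rc, peak = run_and_measure(["ffmpeg", "-i", "sample.mkv", "-f", "null", "-"])
    print("exit code %s, VmPeak %d kB" % (rc, peak))
```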

Unless someone has a better idea.