Category Archives: FATE Server

Pickled FATE

Some people have been checking out the new test client described in the previous post. So far, most of the questions I have received concern the format of the fate-test-cache.bz2 file downloaded from the FATE server. I admire that people are taking an interest in file format particulars — as you know, I encourage that. It’s nothing too special, though. I simply have a Python script called update-test-cache.py that reads FATE’s test_spec table into a list of dictionary data structures. Then it serializes/marshals/flattens the data using Python’s built-in pickle module. It’s trivial to de-pickle on the client side. Of course, Python’s bz2 module helps with size concerns.
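
For the curious, the server-side flow amounts to something like the following sketch; the database backend, file name, and schema handling here are assumptions on my part, not the actual FATE code.

import bz2
import pickle
import sqlite3  # stand-in; the real FATE database backend is not specified here

# Pull the test_spec table into a list of dictionaries, pickle the list,
# and bz2-compress the result into the cache file that clients download.
conn = sqlite3.connect("fate.db")  # hypothetical database file
conn.row_factory = sqlite3.Row
test_specs = [dict(row) for row in conn.execute("SELECT * FROM test_spec")]

with open("fate-test-cache.bz2", "wb") as f:
    f.write(bz2.compress(pickle.dumps(test_specs)))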

What’s the pickle format? Darned if I know, but it works famously, so I don’t really care about reinventing that wheel. Especially when the code for decompressing and deserializing boils down to these 3 lines of Python:
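
(A reconstruction based on the description above; the local cache file name is an assumption.)

import bz2
import pickle

# Read the downloaded cache, decompress it, and de-pickle the test specs
test_specs = pickle.loads(bz2.decompress(open("fate-test-cache.bz2", "rb").read()))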

Also, about that rsync command I mentioned in the last post:

rsync -aL rsync://rsync.mplayerhq.hu:/samples/fate-suite/ samples

Does that actually work for anyone? Occasionally, it works for me. Most of the time, it tells me:

rsync: failed to connect to rsync.mplayerhq.hu: Can't assign requested address (49)
rsync error: error in socket IO (code 10) at /SourceCache/rsync/rsync-30/rsync/clientserver.c(94)

which, according to my searches, is a fairly generic network error (at least the bit about assigning the requested address). Since I am usually populating the sample repository manually anyway, this hasn’t been a big problem. But I am trying to be more diligent about making sure the rsync repository is up to date since I expect more people will be using it.

Anyway, FATE growth plods on with 2 new tests tonight: nsv-demux and real-14_4 (weird, I just realized that the db assigned that one ID 144 completely by coincidence).

FATE Testers Wanted

This evening, I finally got my fate-client.py script minimally ready for general consumption. fate-client.py is the unimaginatively named program I threw together some time ago to validate test specs before activating them for automatic testing by FATE. It works like this:

  • download the script (http://fate.multimedia.cx/fate-client.py)
  • rsync the FATE suite of samples that live on mphq: ‘rsync -aL rsync://rsync.mplayerhq.hu:/samples/fate-suite/ samples’ (without the quotes, of course) — this presently amounts to ~150 MB
  • build FFmpeg as normal
  • ‘./fate-client.py -f </path/to/ffmpeg-binary> -s </path/to/fate-suite/samples>’

That’s it. The script will ask the FATE server for a set of test specifications and run through them. You may also need to specify -l/--libpath= if you built and installed FFmpeg with shared libraries. Naturally, ‘./fate-client.py -h’ will spell out all the options.

Make sure that all the options you pass are valid, or else suffer Python bailout exceptions. I just added the command line options tonight and have not made them very resilient. I have been promising this utility for a long time and I wanted to get something out there sooner rather than later.

Remember that I’m still a rank amateur at Python, so don’t be afraid to call me out if I’m doing anything in the worst Pythonic way imaginable.

Ideas for future improvement:

  • Better logging: instead of dumping to stdout, maybe dump all the results to a CSV file (for spreadsheet analysis) and/or an HTML file for easy viewing
  • Proper versioning: I track the script via a local git repository, but how do I communicate the current version? Would this be version dd394ef8f3dad056c39ab4e1c76951190621cf8b? (one possible approach is sketched just after this list)
  • Robust error handling
  • Range testing (run all tests up to ID n, or run all tests after ID n, or from IDs m to n)
  • Skip a list of tests (for example, it would be useful to skip test #128 — the internal FFmpeg regression test — since it’s not that helpful in this particular scenario)
  • [Your idea here]
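
On the versioning item, here is one possible approach, a sketch only and not anything fate-client.py does today: have the script ask git for its own commit SHA at runtime, assuming it always runs from inside its checkout.

import os
import subprocess

# Hypothetical helper, not part of fate-client.py: report the script's
# version as the commit SHA of the git checkout it lives in.
def script_version():
    script_dir = os.path.dirname(os.path.abspath(__file__))
    try:
        sha = subprocess.check_output(["git", "rev-parse", "HEAD"], cwd=script_dir)
        return sha.strip().decode("ascii")
    except (OSError, subprocess.CalledProcessError):
        return "unknown"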

It’s open source, GPL v2, so patches welcome. Moreover, I would love to hear whether this script works at all for anyone else. Then, I would like to hear how it works on platforms outside of the 3 that FATE now rigorously tests: I speak of Mac OS X, *BSD, Win32 with either MSVC or MinGW, Open/Solaris on all its various platforms, even PlayStation 3 and whatever else. I actually did get that OpenSolaris VMware session to boot after I waited long enough, but I had no idea how to do anything useful with it. That’s when I decided to get down to it and get this script out there so that hopefully someone else will test those platforms.

Extra credit: Figure out why, when bailing out of the test sequence early with Ctrl-C, terminal character echo is off. I.e., the terminal refuses to print keystrokes.
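
My guess about the cause: the Ctrl-C lands while the spawned FFmpeg process has switched the terminal out of echo mode, and nothing switches it back. A defensive fix might snapshot the terminal attributes up front and restore them on the way out, along these lines (run_all_tests is a hypothetical stand-in for the script’s main loop):

import sys
import termios

# Save the terminal attributes before running anything...
fd = sys.stdin.fileno()
saved_attrs = termios.tcgetattr(fd)
try:
    run_all_tests()  # hypothetical stand-in for the test loop
except KeyboardInterrupt:
    print("interrupted early")
finally:
    # ...and restore them even when bailing out with Ctrl-C
    termios.tcsetattr(fd, termios.TCSADRAIN, saved_attrs)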

Sunny FATE

Sun recently released a live CD for their OpenSolaris operating system. What’s one more platform for testing FATE? They’re all virtual anyway. Live CDs make us so spoiled these days.


OpenSolaris logo

I got the OS installed in a VMware [Fusion] session. But the operating system wouldn’t boot after the boot loader handed off to it. I have not had a chance to troubleshoot the issue further.

I wonder what development tools are available under OpenSolaris? Does it just provide gcc? Or is there a proprietary Sun compiler for x86_64? I know Sun invests heavily in compiler optimization, but that’s for their own SPARC hardware; I can’t imagine they would pour any money into making code run well on other CPU architectures.

Memory Efficient FATE

The x86_32 and x86_64 FATE build machines are actually VMware Fusion-hosted virtual machines running in Mac OS X (which, indeed, shall one day perform FATE duty as well). I decided to run with the flock and use Ubuntu for these VMs. However, even with this Mac Mini maxed out to its unofficial limit of 3 gigabytes of RAM, I couldn’t help thinking that they were wasting a lot of resources by running full GNOME desktops. Each VMware session has 1/2 GB of RAM allocated to it and either an 8 or 16 GB disk. Overkill, or so I suspected. I won’t bore you with the tales of the minimal hardware on which I first started running Linux because many of you were once there. I made it a goal to get each build machine down to using a 2 GB disk and hopefully 128 MB of RAM.

I briefly researched Linux distributions that are designed to be tiny, like Damn Small Linux. But I have grown reasonably comfortable with managing Ubuntu and would like to see what I can do to make it lean. At first, I thought I could allocate a small disk, install the desktop CD image, and then either configure a minimal installation or remove lots of packages and configure for no-graphics startup after the install. It turns out that the Ubuntu desktop installer does not have much in the way of customization options, save for timezone and keyboard layout. That’s the trade-off of being so mainstream and simple. But the CD also cannot install if it is only given 2 GB of disk to work with. It just sort of hangs at one point.

There are 2 other install options. The first is the “alternate” desktop ISO. Its description sounds promising, but in practice it attempts the same desktop installation. The last option is the server installation. I am a little wary of Linux distros designated as “server” distros. I have been bitten at least once by a default server installation that was locked down way too tight, like a guard dog that won’t let me near my own junkyard.

But I give it a whirl anyway, and the server version of Ubuntu turns out to be exactly what I was looking for: no graphics, no office or multimedia applications, just a minimal install that only occupies 1/3 of the 2 GB disk.

Next, I check if I can get away with allocating a meager 128 MB of RAM to each system. How to determine absolute memory usage in Linux, what with buffers, caches, etc.? ‘top’ will generally report that all memory is being used anyway. Fortunately, I read about a tool called ‘free’, run in a 3-second polling mode:

$ free -o -s 3
             total       used       free     shared    buffers     cached
Mem:        253080     212020      41060          0      36836     101876
Swap:            0          0          0
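
In the sample above, the memory genuinely spoken for is 212020 - 36836 - 101876 = 73308 kB, roughly 72 MB; the rest is reclaimable buffers and cache. To compute that without eyeballing ‘free’, here is a minimal sketch that reads /proc/meminfo directly (the fields used here are reported in kB):

# Compute truly-used memory (used minus buffers and cache) from /proc/meminfo
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, rest = line.split(":", 1)
        meminfo[key] = int(rest.split()[0])

really_used = (meminfo["MemTotal"] - meminfo["MemFree"]
               - meminfo["Buffers"] - meminfo["Cached"])
print("really used: %d kB" % really_used)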

Actually, I just noticed that ‘top’ also reports the same buffer and cache statistics. Anyway, I had no trouble compiling FFmpeg on x86_32 with only 128 MB of RAM. I tried the same on x86_64. When the build process got to libavcodec/motion_est.c (oh yes, you know the monster of which I speak), I watched the buffers and cached columns gradually shrink while the used column steadily rose. Eventually, the operating system’s out-of-memory killer decided that the process was too fat to live. The upshot is that the x86_64 VM gets 256 MB of RAM. How about that? 64-bit programs really do need twice as much memory.

There’s still the issue of disk usage. 2 GB doesn’t stretch as far as it used to. Fortunately, the FATE test suite lives on a different machine. But I need to store many different compilers and hopefully the most recent ccache output for each of the compilers. Big space management strategy here: configure FFmpeg with ‘--disable-debug’ so the object files and binaries are not bloated with debug symbols.