Monthly Archives: June 2009

Madcow In FFmpeg

Suxel Drol continues his relentless quest to add all Electronic Arts multimedia formats to FFmpeg. The most recent fruit of this labor is the Madcow video decoder.


Need For Speed III: Hot Pursuit -- Madcow video in VLC

The test spec is ready to go, as is another test spec for a new demuxer named QCP.

Speaking of the QCP format: it comes courtesy of Dark Shikari, toiling away at his new gig over at Facebook. Facebook, he informs us, might just be the largest user of FFmpeg. They have quite a few members uploading random multimedia files, which presents a challenge just as formidable as the one Picsearch faces. QCP is just one of the (formerly) unknown formats encountered.

Oh, and I also found a way to fix that pva-demux test that has been failing for over a week. We should have a greener baseline now.

CPU Time Experiment

Science project: Measure how accurately Python measures the time a child process spends on the CPU.

FATE clients execute build and test programs by creating child processes. Python tracks how long a child process has been executing using one number from the 5-element tuple returned from os.times(). I observed from the beginning that this number actually seems to represent the number of times a child process has been allowed to run on the CPU, multiplied by 10ms, at least for Linux.
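
For reference, the relevant field can be sampled directly. Here is a tiny sketch, assuming the documented Unix layout of the os.times() tuple; the CPU-bound child command is just a stand-in:

import os
import subprocess

# os.times() returns (user, system, children_user, children_system, elapsed);
# index 2 is the user CPU time charged to terminated child processes, and it
# only updates once the child has been waited on.
before = os.times()[2]
subprocess.call(["python", "-c", "sum(range(10000000))"])   # any CPU-bound child will do
after = os.times()[2]
print("child CPU time: %.3f seconds" % (after - before))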

I am interested in performing some controlled tests to learn if this is also the case for Mac OS X. Then, I want to learn if this method can reliably report the same time even if the system is under heavy processing load and the process being profiled has low CPU priority. The reason I care is that I would like to set up periodic longevity testing that tracks performance and memory usage, but I want to run it at a lower priority so it doesn’t interfere with the more pressing build/test jobs. And on top of that, I want some assurance that the CPU time figures are meaningful. Too much to ask? That’s what this science project aims to uncover.

Methodology: My first impulse was to create a simple program that simulated harsh FFmpeg conditions by reading chunks from a large file and then busying the CPU with inane operations for a set period of time. Then I realized that there’s no substitute for the real deal and decided to just use FFmpeg.

ffmpeg -i sample.movie -y -f framecrc /dev/null

For loading down the CPU(s), one command line per CPU:

while [ 1 ]; do echo hey > /dev/null; done

I created a Python script that accepts a command line as an argument, sets the process nice level, and executes the command while taking the os.times() samples before and after.
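
A minimal sketch of such a script, assuming os.nice() for the priority change and subprocess for running the command (the optional nice-increment argument is my own embellishment for illustration), could look like this:

import os
import shlex
import subprocess
import sys

# usage: science-project-measure-time.py "<command line>" [nice increment]
command = shlex.split(sys.argv[1])
nice_increment = int(sys.argv[2]) if len(sys.argv) > 2 else 0

os.nice(nice_increment)      # raise the nice value (lower priority); the child inherits it
before = os.times()
subprocess.call(command)     # execute the command as a child process and wait for it
after = os.times()

# index 2 = user CPU time accumulated by terminated child processes
print("child CPU time: %.3f seconds" % (after[2] - before[2]))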

Halfway through this science project, Mans reminded me of the existence of the ‘-benchmark’ command line option. So the relevant command becomes:

time ./science-project-measure-time.py "ffmpeg -benchmark -i sample.movie -y -f framecrc /dev/null"

Here is the raw data, since I can't think of a useful way to graph it. The 5 columns, all measured in seconds, represent:

  1. -benchmark time
  2. Python’s os.times()[2]
  3. ‘time’ real time
  4. ‘time’ user time
  5. ‘time’ sys time

Linux, Atom CPU, 1.6 GHz
========================
unloaded, nice level 0
run 1: 26.378, 26.400, 36.108, 26.470, 9.065
run 2: 26.426, 26.460, 36.103, 26.506, 9.089
run 3: 26.410, 26.440, 36.099, 26.494, 9.357

unloaded, nice level 10
run 1: 26.734, 26.760, 37.222, 26.806, 9.393
run 2: 26.822, 26.860, 36.217, 26.902, 8.945
run 3: 26.566, 26.590, 36.221, 26.662, 9.125

loaded, nice level 10
run 1: 33.718, 33.750, 46.301, 33.810, 11.721
run 2: 33.838, 33.870, 47.349, 33.930, 11.413
run 3: 33.922, 33.950, 47.305, 34.022, 11.849


Mac OS X, Core 2 Duo, 2.0 GHz
=============================
unloaded, nice level 0
run 1: 13.301, 22.183, 21.139, 13.431, 5.798
run 2: 13.339, 22.250, 20.150, 13.469, 5.803
run 3: 13.252, 22.117, 20.139, 13.381, 5.728

unloaded, nice level 10
run 1: 13.365, 22.300, 20.142, 13.494, 5.851
run 2: 13.297, 22.183, 20.144, 13.427, 5.739
run 3: 13.247, 22.100, 20.142, 13.376, 5.678

loaded, nice level 10
run 1: 13.335, 22.250, 30.233, 13.466, 5.734
run 2: 13.220, 22.050, 30.247, 13.351, 5.762
run 3: 13.219, 22.050, 31.264, 13.350, 5.798

Experimental conclusion: Well, this isn't what I was expecting at all. Loading the CPU altered the CPU time results; I thought -benchmark would be very consistent across runs despite the CPU load. My experimental data indicates otherwise, at least for Linux, which is the platform that was going to be in charge of this project. This creates problems for my idea of an adjunct longevity tester on the main FATE machine.

The Python script, science-project-measure-time.py, is included with the full post.


Practical Cloud

Who in their right mind would ever want to store their working documents somewhere, out there, “in the cloud”, i.e., on someone else’s servers? I openly wondered this a few weeks ago and have wondered about it ever since the idea was first proposed many years ago.

It turns out that the answer is… me.

Here’s how it happened: I contribute to a video game database named MobyGames. A long time ago, I started creating a series of plain ASCII files to help me track which games aren’t in the database yet. Other people wanted to submit new lists and help me maintain the existing lists. For the last 6 months, I have been occasionally brainstorming and researching how to create a very simple, database-backed, collaborative web application.

Yesterday, I thought of a better solution: A Google spreadsheet. My, that was easy. It pretty much does everything I was hoping my collaborative web app would do and it required zero coding on my part.

People often suggested that I set up a wiki in order to manage this type of data. I generally consider a wiki to be the poor man’s content management system (CMS) — little more than a giant, distributed, collaborative whiteboard (ironically, before I set up the MultimediaWiki on top of MediaWiki, I had again spent a long time brainstorming my own custom database-backed web app for the same purpose). I wanted a little more structure imposed on this data which is exactly what the spreadsheet can provide. A proper database would be even better but I’m willing to compromise for the sake of just having something useful with minimal effort on my part.

Still, I was hoping that writing a simple web app in some kind of existing, open source framework would be a good exercise on the way to making a more complex web app out of FATE. My occasional study of web frameworks during the past 6 months has taught me that it's something I genuinely don't wish to mess with.

Lightweight FATE Testing

Over the years, I have occasionally heard about a program called netcat, but it was only recently that I realized what it was -- an all-purpose network tool. I can't count the number of times this tool would have helped me in the past 10 years, nor the number of times I entertained the idea of basically writing the same thing myself.

I have been considering the idea of a lightweight FATE testing tool. The idea would be a custom system that could transfer a cross-compiled executable binary to a target and subsequently execute specific command line tests while receiving the stdout/stderr text back via separate channels. SSH fits these requirements quite nicely today but I am wondering about targeting platforms that don’t have SSH (though the scheme does require TCP/IP networking, even if it is via SLIP/PPP). I started to think that netcat might fit the bill. Per my reading, none of the program variations that I could find are capable of splitting stdout/stderr into separate channels, which is sort of a requirement for proper FATE functionality.

RSH also seems like it would be a contender. But good luck finding the source code for the relevant client and server in this day and age. RSH has been thoroughly disowned in favor of SSH. For good reason, but still. I was able to find a single rshd.c source file in an Apple open source repository. But it failed to compile on Linux.

So I started thinking about how to write my own lightweight testing client/server mechanism. The server would operate as follows:

  • Listen for a TCP connection on a predefined port.
  • When a client connects, send a random number as a challenge. The client must answer with an HMAC computed over this random number using a shared key (see the sketch after this list). This is the level of security that this protocol requires -- just prevent unauthorized clients from connecting. The data on the wire need not be kept secret.
  • The client can send a command indicating that it will be transferring binary data through the connection. This will be used to transfer new FFmpeg binaries and libraries to the target machine for testing.
  • The client can send a command with a full command line that the server will execute locally. As the server is executing the command, it will send back stdout and stderr data via 2 separate channels since that’s a requirement for FATE’s correct functionality.
  • The client will disconnect gracefully from the server.
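
Here is a rough sketch of the server side of the first few steps, assuming Python on the target, a pre-shared key, HMAC-SHA1 for the challenge (the hash choice is mine), and a 1-byte prefix to keep stdout and stderr apart on the wire. A real implementation would loop over commands, frame the data with lengths, and stream output incrementally rather than sending it all at the end:

import hashlib
import hmac
import os
import socket
import subprocess

SHARED_KEY = b"example-shared-key"   # assumed; would be configured out of band
PORT = 12345                         # arbitrary port for this sketch

def authenticate(conn):
    # Send a random challenge; the client must reply with HMAC(key, challenge).
    challenge = os.urandom(16)
    conn.sendall(challenge)
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha1).digest()
    answer = conn.recv(len(expected))   # ignoring short reads for brevity
    return answer == expected

def run_command(conn, command_line):
    # Execute the command locally and relay stdout/stderr as separate channels,
    # tagged with a 1-byte prefix ('1' = stdout, '2' = stderr); real framing
    # would also carry a payload length.
    proc = subprocess.Popen(command_line, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    conn.sendall(b"1" + out)
    conn.sendall(b"2" + err)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", PORT))
server.listen(1)
conn, addr = server.accept()
if authenticate(conn):
    run_command(conn, "ffmpeg -i sample.movie -y -f framecrc /dev/null")
conn.close()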

The client should be able to launch all of the tests over a single TCP connection. I surmise that this will be faster than SSH in environments where SSH is an option. As always, profiling will be necessary. Further, the client will have built-in support for commands such as {MD5} and some of the others I have planned. This will obviate the need to transfer possibly large amounts of data from a conceivably bandwidth-limited device. This assumes that performing tasks such as MD5 computation on the target does not outweigh the time it would take to simply transfer the data back to a faster machine.
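
The {MD5} idea essentially boils down to hashing the output on the target and shipping only the digest back. A sketch of that piece, glossing over how the {MD5} directive itself would be parsed:

import hashlib
import subprocess

# Run the test command on the target, but return only an MD5 digest of its
# stdout instead of shipping the whole stream back over a slow link.
proc = subprocess.Popen(["ffmpeg", "-i", "sample.movie", "-f", "framecrc", "-"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout_data, stderr_data = proc.communicate()
print(hashlib.md5(stdout_data).hexdigest())   # only 32 hex characters cross the wire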

I can't possibly be the first person to want to do this (in fact, I know I'm not -- John K. also wants something like this). So is there anything out there that already solves my problems? I know of the Test Anything Protocol but I really don't think it goes as far as I have outlined above.