Monthly Archives: March 2009

Archiving Binary Builds

Someone recently asked on the ffmpeg-devel mailing list for nightly binary builds of FFmpeg and its associated libraries to be made available for download. The idea seemed to gain traction on the list, so I began thinking about how we could make this a reality. Since FATE clients are already building FFmpeg binaries day and night, it’s not a huge leap to extend the infrastructure slightly to package the binaries and send them to a central server. So let’s talk process:

  • After building binaries as part of FATE, make a compressed bundle (.tar.bz2 for Unix, .zip for Windows) and ship the package right here to multimedia.cx. Hey, I have the space to spare (an alleged 500 GB with the current hosting plan, of which I am currently barely using 6 GB, and nearly half of that is FATE data). I also have FTP account options which should facilitate transfer.
  • The filename would follow the naming convention of “ffmpeg-bin-svn&lt;revision&gt;-OS-arch-compiler.ext”. So, for example, the build of SVN 18005 from Linux / x86_32 / gcc 4.2.4 would be: ffmpeg-bin-svn18005-linux-x86_32-gcc-4.2.4.tar.bz2 (see the packaging sketch after this list).
  • What should be in the package? The FFmpeg binary programs and the associated libraries, of course. But in what hierarchy? A flat hierarchy doesn’t strike me as a good idea (unzip and potentially trash existing files in your current directory). Instead, package the files in, e.g., ffmpeg-svn18005-linux-x86_32-gcc-4.2.4/.
  • There should be a standard, auto-generated README.txt file in the same directory. Actually, perhaps it should be a simple HTML file, since I envision it containing — in addition to basic information such as the SVN revision and the time it was built — a report of which tests passed and which failed, along with links to each report in the FATE database.
  • Create a web interface that allows users to browse among the latest FFmpeg binary builds. Out of everything presented in this brainstorm, this is the step that actually gives me some pause since I don’t exactly know how I would implement it off the top of my head.
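For illustration, here is roughly what the packaging step might look like. This is only a Python sketch with invented function and path names; the real FATE script would supply the revision, OS, architecture, and compiler strings itself.

    import os
    import tarfile

    # Hypothetical packaging helper; all names and paths are invented.
    def package_build(revision, os_name, arch, compiler, bin_dir):
        name = "svn%d-%s-%s-%s" % (revision, os_name, arch, compiler)
        archive = "ffmpeg-bin-%s.tar.bz2" % name
        topdir = "ffmpeg-%s" % name  # single top-level directory, per above
        tar = tarfile.open(archive, "w:bz2")
        for f in os.listdir(bin_dir):
            # root every file under topdir so that unpacking can't trash
            # whatever is already sitting in the user's current directory
            tar.add(os.path.join(bin_dir, f), arcname=os.path.join(topdir, f))
        tar.close()
        return archive

    # e.g.: package_build(18005, "linux", "x86_32", "gcc-4.2.4", "install/bin")

A Windows client would do the same dance with the zipfile module instead.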

Feedback welcome. Otherwise, these are likely to be the decisions I run with for the first iteration of this plan, when I get around to it (think of this as the public comment period).

Critical Mass of FATE

Okay, so you may have noticed that FATE has 3 more machines/platforms from which it is actively accumulating build/test results: FreeBSD/x86_32, DragonFly BSD/x86_32, and Linux/AVR32 (thanks to Michael K. and Måns for running the FATE script on those platforms). I was wondering when a volunteer would step forth to continuously run FATE on some BSD platform.

But the FATE main page is now unbearably unwieldy. When I first put FATE into service, the entire front page was dynamically generated using some horrifically unoptimized queries. A little over a year ago, I rushed a highly naive caching mechanism into production to address this problem. The way it operated was as follows (you might want to be sitting down for this): a cron job, run every 15 minutes, invokes a Python script that connects to the database server, performs all the queries needed to fetch data for the main page, and builds a file called main-page-cache.php, which is then transferred to the server where it overwrites the old copy. The main index.php script driving the site simply includes that file. Okay, don’t all go contributing this to The Daily WTF at once. Hey, it has kept the page loading quickly for the last year or so. But it also explains why I couldn’t modify it very easily to allow arbitrary sorting.
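For the morbidly curious, the mechanism amounted to something like this. A minimal sketch only; the table, column, and host names are all invented, and the real script obviously does more:

    import MySQLdb  # assuming the MySQLdb module

    conn = MySQLdb.connect(host="db.example.com", user="fate",
                           passwd="secret", db="fate")
    cursor = conn.cursor()
    cursor.execute("SELECT config, revision, status FROM latest_builds")

    # render the results as a static fragment for index.php to include()
    cache = open("main-page-cache.php", "w")
    cache.write("<table>\n")
    for config, revision, status in cursor.fetchall():
        cache.write("  <tr><td>%s</td><td>r%s</td><td>%s</td></tr>\n"
                    % (config, revision, status))
    cache.write("</table>\n")
    cache.close()
    conn.close()
    # ...then scp main-page-cache.php over to the web server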

Fun fact: At this point, index.php did not strictly need to connect to the database in order to simply render the main page. However, I quickly learned to do a no-op database connection anyway; without it, the end user’s browser would cache the output of the page. I’m sure there must be better browser cache control mechanisms available (explicit Cache-Control or Expires headers come to mind).

Until I switched web hosts recently, I had to run the Python script on a local machine and scp the results to the server. This explains why, if there was a power failure at home that outlasted the UPS, the main FATE page wouldn’t get updated until I returned home. That problem is solved now by the fact that the cron job and script run on the same server as FATE. At least, the problem was solved until I had to switch back to my prior web host.

After I deployed the caching system, I wondered about ways I could possibly trigger cache page rebuilds automatically after entering new results. Last autumn, I revised the FATE architecture so that results are received through a PHP script. I started to think that, when a configuration sends results through this script, it could update a new PHP cache data file specific to that configuration. The main script could dynamically include a series of these PHP cache files and sort out the data inside.

Then it dawned on me to store the data in an SQLite database with a single, highly non-normalized table. And just when I was about to go to code on that idea, it occurred to me that, as much as I love SQLite, there’s really no reason I can’t put the cache table straight into the main MySQL FATE database. The big advantages: no more up-to-15-minute delay before new results show up on the main page, and more flexible sorting of the results on the main page.

So the plan goes like this:

  1. Create a non-normalized cache table in the main FATE database that includes config ID, machine ID, OS, architecture, compiler string, latest build ID, timestamp the build ID was logged, SVN revision it corresponds to, status of the build, number of tests total, and number of tests passed.
  2. Create a script that performs an initial population of this table based on the configurations marked active and their latest build records in the database. It will be necessary to re-run this script whenever new configurations are added to the database, or when configuration data is updated (most often when I compile a new gcc from SVN).
  3. Modify the data receiver PHP script so that it properly updates the correct row in the table.
  4. Go crazy with the main FATE page. Sorting the data by different fields is as straightforward as an ORDER BY clause in the SELECT statement (see the sketch after this list).
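To make steps 1 and 4 concrete, here is a hedged sketch in Python with embedded SQL; the table and column names are my own inventions for illustration, not FATE’s actual schema:

    import MySQLdb  # assuming the MySQLdb module

    conn = MySQLdb.connect(db="fate", user="fate", passwd="secret")
    cursor = conn.cursor()

    # step 1: one non-normalized row per active configuration
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS main_page_cache (
            config_id    INT PRIMARY KEY,
            machine_id   INT,
            os           VARCHAR(32),
            architecture VARCHAR(32),
            compiler     VARCHAR(64),
            build_id     INT,
            logged       DATETIME,
            revision     INT,
            build_status VARCHAR(16),
            tests_total  INT,
            tests_passed INT
        )""")

    # step 4: arbitrary sorting becomes a simple ORDER BY; the sort field
    # would come from the page request and must be checked against a
    # whitelist of column names, since it is spliced into the query
    sort_field = "os"
    cursor.execute("SELECT * FROM main_page_cache ORDER BY " + sort_field)
    for row in cursor.fetchall():
        print row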

The main page should also send a cookie so that the page “remembers” the user’s last (and presumably preferred) sorting order. That assumes that any of FATE’s users actually browse the web with cookies enabled (doesn’t strike me as likely).

Writing out these ideas is useful for motivating further brainstorms. I just realized that I may as well create a simple method for accessing the latest FATE data via HTTP, perhaps output in CSV format (no XML, thanks). Perhaps others can think of creative ways to interpret the data and act on it. E.g., maybe someone else can figure out how to send email and IRC notifications before I can find the time to solve those problems.
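As a sketch of how trivially consumable such a feed could be, assuming a hypothetical URL and field layout (neither exists yet):

    import csv
    import urllib

    # hypothetical endpoint; fields assumed to be: config, revision,
    # status, total tests, passed tests
    feed = urllib.urlopen("http://fate.multimedia.cx/latest.csv")
    for config, revision, status, total, passed in csv.reader(feed):
        if passed != total:
            print "%s: only %s of %s tests passed at r%s" % \
                  (config, passed, total, revision)

A dozen lines like that could drive an email or IRC notifier.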

All Web Hosts Suck

The last few weeks have taught me that all web hosts suck. Ordinarily, I don’t feel it necessary to write a blog post railing and ranting against a company that annoys me; I do my best to let these things go quickly. However, according to the trusty “&lt;web host&gt; sucks” research method, WebFaction has thus far managed to escape any negative criticism. This wouldn’t be such a problem except that they’re a tad smug about it. Further, I think I am having trouble achieving emotional closure and catharsis on this matter since I can find no similarly suffering souls out there with whom to commiserate. So it is with a heavy heart that I feel compelled to type out a petty, petulant “WebFaction sucks” post.

Who knows? Maybe I’m just the unluckiest customer a web host has ever had.

FFmpeg 0.5 Is Released

If you’re reading this, the multimedia.cx DNS change has propagated, yet again, for another hosting service. But that’s a venomous tale for another blog post. FFmpeg v0.5 hit the web yesterday, right about the time that my new hosting service started having serious problems. Thus, I was unable to observe the occasion at the time.


[FFmpeg logo]

But now I’m able to let it sink in a bit more… wow. I can’t even remember the last time we had a release, and to be honest, I don’t really want to look it up because it’s a little embarrassing to think about. Thanks to all my co-devs who made the release what it is. But thanks especially to Diego Biurrun, who stuck his neck out and pushed hard for this release effort starting a little over a month ago. Somehow, we got the release out.

Rest assured, we are taking notes this time in order to make the process easier next time around. This is the dawn of a new era of open source multimedia (here’s hoping, anyway).