Monthly Archives: February 2009

The Women Of Webhosting

Some of you may have noticed that the various websites hosted here were having a tad bit of difficulty recently. Long story short: my previous web host was having some serious problems and I decided it was time to ditch them and move on to a better one. Fortunately, I had (and continue to maintain) consistent, automated backups of everything hosted here. But I really wasn’t looking forward to the task of finding a new provider. Whenever I have surveyed web hosting providers in the past, they all seemed pretty much the same, offering UNLIMITED EVERYTHING!! along with perfect uptime and reliability for next to nothing (*** see details below in 5-point font). And most of their websites boast a design style reminiscent of the worst e-marketing sites, guaranteed to annoy the utilitarian, tech-savvy geek.

When it boils right down to it, I think I was being asked to make a decision regarding a new web host based on the female smiling at me on the front page. Honestly, these photos were generally the only distinguishing feature among the various services:

Miss 1&1
Miss midPhase
Miss FastDomain


Camp Luna

I remember when the Mono people first announced the Moonlight project for Linux that would interoperate with Microsoft’s Silverlight. They claimed that Microsoft would release a special binary codec pack that would allow Linux users to play back Microsoft’s proprietary media codecs. However, this codec pack would not be allowed for use in any other application, such as FFmpeg or GStreamer. How were they going to enforce that? Or so I wondered. Tonight I learned how.

I started investigating the API of the binary codec pack blobs a few weeks ago. I got as far as figuring out how Moonlight registers the codecs. Then I lost motivation, in no small part because there isn’t that much in the blob that I would deem interesting (perhaps one method for keeping people from sorting out the API). In the comments of the last post on the matter, people wondered if the codec pack included support for WMA Voice, which is still unknown. I can’t find any ‘voice’ strings in the blob. However, I do find references to lossless coding. This might pertain to Windows Lossless Audio, or it could just be a special coding mode for WMA3 Pro. Either way, I’m suddenly interested.

So I looked for interface points in the Moonlight source. Moonlight simply loads and invokes registration functions for WMA, WMV, and MP3. The registration functions don’t return any data that Moonlight stores, and Moonlight doesn’t appear to load (via dlsym()) or invoke any other codec pack functions directly. So how can it possibly be interfacing with the codecs? The only other way the interaction could flow is if the codec pack shared library were invoking functions in Moonlight…

Oh, no… they wouldn’t do that, would they?
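The suspected arrangement — a loaded library that drives its host rather than the other way around — can be sketched in Python terms as a plugin whose registration entry point calls back into the host’s API. All names below are invented for illustration; this is an analogy for the control flow, not the actual Moonlight interface.

```python
# Hypothetical sketch of the suspected control flow: the host (think
# Moonlight) loads a "codec pack" and calls only its registration
# function; the pack then calls back into the host to register itself.

host_registry = {}

def host_register_decoder(name, decode_fn):
    """Host-side API that the plugin is expected to call back into."""
    host_registry[name] = decode_fn

# --- what the binary blob would do, modeled as a plain function ---
def codecpack_register(host_api):
    # Instead of returning codec data to the host, the plugin drives
    # the host by invoking the host's own registration function.
    host_api("wma", lambda data: b"decoded:" + data)

# --- host side: load the pack and invoke only its entry point ---
codecpack_register(host_register_decoder)

print(sorted(host_registry))  # the plugin has registered itself
```

The telltale sign of this pattern in a real shared library is that the blob must resolve symbols exported by the host process, which is exactly the kind of dependency that ties it to one specific application.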


ARM On FATE Is A Reality

Thanks to Måns for modifying the FATE script to support automatically cross compiling FFmpeg for a different target CPU on a faster host machine, transferring the binary to a machine that actually runs the target CPU, and remotely executing the battery of FATE test specs there. The upshot of all of this is that FATE is effectively running on an ARM-equipped Beagle Board and contributing results that anyone can view via the main FATE page.
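The cycle described above — build on the fast host, push the binary to the board, run the specs remotely — can be sketched as the sequence of shell commands one iteration would issue. This is a rough illustration, not the actual FATE script; the hostname and paths are hypothetical.

```python
# Sketch of one cross-testing cycle: cross compile on the host,
# transfer the binary, then run each test spec on the target over ssh.
# "beagleboard", /build/arm, and /tmp/ffmpeg are made-up examples.

def cross_fate_commands(target_host, build_dir, test_specs):
    """Return the shell commands one cycle would run, in order."""
    cmds = [
        f"make -C {build_dir} ffmpeg",                       # cross compile
        f"scp {build_dir}/ffmpeg {target_host}:/tmp/ffmpeg"  # transfer
    ]
    for spec in test_specs:                                  # remote runs
        cmds.append(f"ssh {target_host} /tmp/ffmpeg {spec}")
    return cmds

for cmd in cross_fate_commands("beagleboard", "/build/arm", ["-version"]):
    print(cmd)
```

The point of the split is that the slow target machine never has to compile anything; it only executes tests and reports results back.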

I hope to get his changes rolled into the main script soon. It’s great work, and I’m hard-pressed to name another continuous integration system that can operate on such diverse platforms, environments, and circumstances.

I dusted off my old Sega Dreamcast this evening — the one I used to do homebrew programming on — and enjoyed some games. As I was playing, I realized that the next evolution of FATE would be to get it to continuously run automatic cross-compile and test cycles on the Dreamcast’s SH-4 via a custom serial protocol, similar to what John Koleszar described in this comment.

But I have a few more FFmpeg code paths to cover before I can even think about that.

Encoding And Muxing FATE

One weak spot in FATE’s architecture has to do with encoding and muxing formats. So far, I have been content to let the master regression suite handle the encode/mux tests for the most important formats that FFmpeg supports. But the suite doesn’t handle everything. Plus, I still have the goal of eventually breaking up all of the regression suite’s functionality into individual FATE test specs.

At first, the brainstorm was to encode things directly to stdout so that nothing ever really has to be written to disk. The biggest problem with this approach is that stdout is non-seekable. For formats that require seeking backwards, this is a non-starter (e.g., a QuickTime muxer will always have to seek back to the start of the file and fill in the total size of the mdat atom that was just laid down on disk).
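To make the seek-back requirement concrete, here is a minimal sketch of the mdat pattern: the 32-bit atom size precedes the data it describes, so the muxer writes a placeholder, lays down the payload, then seeks back to patch in the real size. The layout is simplified for illustration.

```python
# Why a QuickTime-style muxer needs seekable output: the 'mdat' atom's
# size field comes first, so it must be backpatched after the payload
# is written. This is impossible when writing to a pipe or stdout.
import io
import struct

def write_mdat(f, payload):
    start = f.tell()
    f.write(struct.pack(">I", 0))   # placeholder for the atom size
    f.write(b"mdat")                # atom type
    f.write(payload)                # media data
    end = f.tell()
    f.seek(start)                   # <-- the seek stdout cannot do
    f.write(struct.pack(">I", end - start))  # real size, incl. header
    f.seek(end)

buf = io.BytesIO()
write_mdat(buf, b"\x00" * 100)
size = struct.unpack(">I", buf.getvalue()[:4])[0]
print(size)  # 108 = 4 (size field) + 4 ('mdat') + 100 payload bytes
```

Any test harness that wants to exercise such a muxer therefore has to hand it a real, seekable file rather than a pipe.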

So it’s clear that an encode/mux test needs to commit bytes to some seekable media. Where is it okay to write to disk? I think $BUILD_PATH should be okay, since the build step already writes data there.

The natural flow of the master regression suite is to encode/mux a test stream, run a hash of the resulting stream, then demux/decode the encoded stream and run a hash on the result. In FATE, I imagine these 2 operations would be split into 2 separate tests. But how can I guarantee that one will run before the other? Right now, there’s no official guarantee of the order in which the test specs run. But I can, and plan to, change the policy so that the tests are always run in order of their database IDs. That way, I can always guarantee that the encode/mux test is run and the correct sample file is waiting on disk before the corresponding demux/decode test executes.
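The ordering policy amounts to a one-line sort before the run loop. A minimal sketch, with hypothetical spec records standing in for rows from the FATE database:

```python
# Run test specs sorted by database ID so that an encode/mux spec
# (inserted first, hence a lower ID) is guaranteed to have produced
# its output file before the matching demux/decode spec runs.
# The spec records below are invented for illustration.
specs = [
    {"id": 124, "name": "qt-demux-decode"},   # depends on 123's output
    {"id": 123, "name": "qt-encode-mux"},     # writes the sample file
]

run_order = [s["name"] for s in sorted(specs, key=lambda s: s["id"])]
for name in run_order:
    print(name)
```

Relying on insertion order of IDs is an implicit dependency mechanism; it works as long as each encode test is always entered into the database before its companion decode test.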

I also need a way to verify the contents of the encoded file on disk. I think this can be handled via a new special directive along the lines of:

{MD5FILE,$BUILD_PATH/output} $BUILD_PATH/ffmpeg -i $SAMPLES_PATH/input/input -f format -y $BUILD_PATH/output

This will read the bytes of the file ‘output’ and compute their MD5 hash. This seems simple enough in a local environment, but it is another item that may pose challenges in the cross-FATE architecture I am working on with Måns, which will support automated testing on less powerful or differently-targeted platforms.
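What the proposed {MD5FILE,...} directive boils down to: after the command runs, hash the bytes of the named output file instead of the command’s stdout. A minimal sketch of that step, using a throwaway temp file in place of $BUILD_PATH/output:

```python
# Hash a file's contents in chunks, as a {MD5FILE,...} handler might.
# The temp file below merely stands in for $BUILD_PATH/output.
import hashlib
import os
import tempfile

def md5file(path):
    """Return the hex MD5 digest of the file's bytes."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"encoded stream bytes")
print(md5file(path))
os.remove(path)
```

In the cross-FATE case, the wrinkle is that the output file lives on the remote target, so either the file or its hash has to make the trip back to the host before the comparison can happen.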