Yearly Archives: 2009

OpenCL On The Horizon

Mac OS X 10.6, a.k.a. Snow Leopard, is slated for release at the end of this week. One of the most interesting features I have read about is support for OpenCL, a parallelization framework.

So how about it? What kind of possibilities does this hold for something like FFmpeg? The pedagogical example in the Wikipedia article demonstrates partitioning a fast Fourier transform so that it can be handled as separate work units, possibly by separate CPUs. I doubt that it would make a (positive) difference to, e.g., split up all of the inverse transforms performed while decoding a single video frame.
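To make the work-unit idea concrete, here is a minimal sketch in OpenCL's C-based kernel language; the host code would enqueue one work-item per 8×8 block and let the runtime spread the blocks across however many CPU cores (or GPU compute units) are available. The kernel name, data layout, and the choice of dequantization as the workload are my own inventions for illustration, not anything from the spec:

    /* Hypothetical kernel: one work-item dequantizes one 8x8 block of
     * coefficients. Name and layout are made up for illustration. */
    __kernel void dequant_blocks(__global short *coeffs,   /* all blocks, 64 coefficients each */
                                 __constant unsigned char *quant_matrix)
    {
        int block = get_global_id(0);      /* which 8x8 block this work-item owns */
        __global short *blk = coeffs + block * 64;

        for (int i = 0; i < 64; i++)
            blk[i] *= quant_matrix[i];     /* per-coefficient dequantization */
    }

Then again, if each work unit is this tiny, I suspect the dispatch overhead would swamp any gains, which is the root of my doubt above.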

I really can’t judge the spec by the one example. Perhaps I should, at the very least, read the overview slides available here.

Sometimes I think that it doesn’t help my development as a programmer and computer scientist that I view every single technological development that comes down the pike through the fairly narrow lens of multimedia hacking.

Video Ads In Magazines

I am greatly anticipating learning more about how this technology works: Video appears in paper magazines. Copies of Entertainment Weekly (a U.S. entertainment magazine) will have small, presumably flexible screens that are supposed to be able to store 40 minutes of video. The magazines are slated to go on sale in Los Angeles and New York next month. With any luck, San Francisco (which I am near) may see a few as well.


Americhip Demo

The BBC article reports that the underlying chip technology is supposed to be similar to the stuff found in singing greeting cards. That sounds like an oversimplification. But the article also names the tech supplier– Americhip, the self-proclaimed leader in multisensory marketing. They have a YouTube channel with demos of this and related technology.

FATE Opportunities In The Cloud

Just a few months ago, I couldn’t see any value in this fad known as cloud computing. But then, it was in the comments for that blog post that Tomer Gabel introduced me to the notion of using cloud computing for, well, computing. Prior to that, I had only heard of cloud computing as it pertained to data storage. A prime example of cloud computing resources is Amazon’s Elastic Compute Cloud (EC2).

After reading up on EC2, my first thought vis-à-vis FATE was to migrate my 32- and 64-bit x86 Linux compilation duties to a beefy instance in the cloud. However, the smallest instance currently costs $73/month to leave running continuously (the $0.10/hour rate adds up over a 730-hour month). That doesn’t seem cost effective.

My next idea was to spin up an instance for each 32- or 64-bit x86 Linux build on demand when there was new FFmpeg code in need of compilation. That would mean 20 separate instances right now, instances that each wouldn’t have to run very long. This still doesn’t seem like a very good idea since instance computing time is billed by the hour and it’s rounded up. Thus, even bringing up an instance for 8 minutes of build/test time incurs a full billable hour. I’m unclear on whether bring-up/tear-down cycles of, say, 1 minute each, are each billed as separate compute hours, but it still doesn’t sound like a worthwhile solution no matter how I slice it.
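To make the rounding problem concrete, here is a trivial C sketch of the billing math as I understand it; the 8-minute figure is just my guess at a quick build/test cycle:

    #include <stdio.h>

    /* EC2 bills each instance-run in whole hours, rounded up. */
    int main(void)
    {
        int run_minutes    = 8;                        /* one quick build/test cycle */
        int billable_hours = (run_minutes + 59) / 60;  /* round up to the next whole hour */

        /* 20 configs on 20 separate instances: 20 x 8 minutes of real
         * work, but 20 full billable instance-hours */
        printf("%d minutes -> %d billable hour(s); 20 instances -> %d hours\n",
               run_minutes, billable_hours, 20 * billable_hours);
        return 0;
    }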

A different architecture occurred to me recently: For each new code revision, spin up a new, single core compute instance and run all 20 build/test configurations serially. This would at long last guarantee that each revision is being built, at least for 32- and 64-bit x86 Linux. It’s interesting to note that I could easily quantify the financial cost of an SVN commit– if it took, say, 2-3 hours to build and test the 20 configs at the small instance’s $0.10/hour rate, that would amount to $.30 per code commit.

Those aren’t the only costs, though. There are additional costs for bandwidth, both in and out (at different rates depending on direction). Fortunately, I designed FATE to minimize bandwidth burden, so I wouldn’t be worried about that cost. Understand, though, that data on these compute instances is not persistent. If you need persistent storage, that’s a separate service called Elastic Block Store. I can imagine using this for ccache output. Pricing for this service is fascinating: Amazon charges for capacity used on a monthly basis, naturally, but also $.10 per million I/O requests.

This is filed under “Outlandish Brainstorms” for the time being, mostly because of the uncertain and possibly intractable costs (out of my own pocket). But the whole cloud thing is worth keeping an eye on, since it’s a decent wager that prices will only decline from this point on and make these ideas and others workable. How about a dedicated instance loaded with the mphq samples archive, iterating through it periodically and looking for crashers? Or how about filling a small compute instance’s 160 GB of space with a random corpus of multimedia samples found all over the web (I think I know people who could hook us up), then periodically processing random samples from the corpus while tracking and logging statistics about performance?

Klondike Moon SEQ

I played an old DOS game a few months ago by the title of Klondike Moon. I really didn’t comprehend it at all. The gameplay dealt with outer-space mining; the storyline was something about paying off your debt with the proceeds of your labor while actively thwarting your opponents from making good on theirs. That struck me as odd– it wasn’t about stealing what they had; it was merely a scorched-earth matter of ensuring that they couldn’t prosper.


Klondike Moon Title

But taking a second look at it recently, I noticed that the CD-ROM has a VIDEOS/ subdirectory. Clearly, this directory holds the FMV for the game. Each FMV is actually spread across 3 files: a .VID file (I’m presuming this is the video data), a .SFX file (looks to be raw, unsigned, 8-bit PCM), and a small .SEQ file (I suspect this ties all the data together). There are 23 .SEQ files, which are either 26, 37, 103, 114, 158, 312, or 1280 bytes in size. These numbers all happen to be divisible by 11 after first subtracting 15, which leads me to believe that each file contains a 15-byte header followed by a series of 11-byte records.
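If that hunch is right, a throwaway C tool to eyeball the records might look like this. The header/record split is pure speculation on my part; the program just skips the presumed 15-byte header and hex-dumps each 11-byte record:

    #include <stdio.h>

    #define HEADER_SIZE 15   /* presumed fixed header (speculative) */
    #define RECORD_SIZE 11   /* presumed per-record size (speculative) */

    int main(int argc, char *argv[])
    {
        unsigned char rec[RECORD_SIZE];
        FILE *f;
        int n = 0;

        if (argc < 2) {
            fprintf(stderr, "usage: %s file.SEQ\n", argv[0]);
            return 1;
        }
        f = fopen(argv[1], "rb");
        if (!f)
            return 1;

        fseek(f, HEADER_SIZE, SEEK_SET);   /* skip the presumed header */
        while (fread(rec, 1, RECORD_SIZE, f) == RECORD_SIZE) {
            printf("record %3d:", n++);
            for (int i = 0; i < RECORD_SIZE; i++)
                printf(" %02X", rec[i]);   /* dump each 11-byte record for eyeballing */
            printf("\n");
        }
        fclose(f);
        return 0;
    }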

Meanwhile, the .VID files clearly begin with a 768-byte palette. I don’t think that the frames are uncompressed, paletted images; or, if they are, the frames don’t use a common width.

I’m trying to remember a formula (I seem to recall something from the discrete math branch of mathematics) for doing remainder math, something involving an operator that looks like an equal sign but with 3 bars instead of the customary 2 — that would be the congruence symbol, ≡. It turns out that the concept I am searching for is modular arithmetic. I was hoping that this could lead me to a formula that would show me possible frame dimensions given the sizes of the files, but I’m too tired to figure it out right now. You’re welcome to study the files and their sizes, though.
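Pending a proper formula, brute force gets the same answer: if a .VID file really were a 768-byte palette followed by raw paletted frames of fixed dimensions, then (file size − 768) mod (width × height) would have to be 0, i.e. the image data would hold a whole number of frames. A speculative C sketch, using a made-up file size and common DOS-era screen dimensions (nothing here is confirmed from the actual files):

    #include <stdio.h>

    /* Screen candidate dimensions: keep only those where the image data
     * divides into a whole number of frames. */
    int main(void)
    {
        long data_size = 1472768 - 768;   /* hypothetical .VID size minus the palette */
        int widths[]   = { 160, 256, 320, 640 };
        int heights[]  = { 100, 120, 200, 240, 400, 480 };

        for (int w = 0; w < 4; w++)
            for (int h = 0; h < 6; h++) {
                long frame_size = (long)widths[w] * heights[h];
                if (data_size % frame_size == 0)
                    printf("%dx%d fits: %ld whole frames\n",
                           widths[w], heights[h], data_size / frame_size);
            }
        return 0;
    }

Of course, several candidate dimensions can pass the divisibility test for any given file size, so this only narrows the field; it doesn’t settle it.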

MultimediaWiki page and samples, as is customary.