Breaking Eggs And Making Omelettes

Topics On Multimedia Technology and Reverse Engineering


The Freezing Point of Brussels Sprout Soda

August 31st, 2009 by Multimedia Mike

In the interest of using the internet to facilitate original scientific research, I am publishing the results of my inadvertent experiment regarding the relative freezing points of various novelty sodas.

Jones holiday soda

Revenge of the White Elephant
At a white elephant gift exchange a few years ago, I received a re-gifted set of novelty holiday-themed sodas created by the Jones Soda Company. They created several of these over the years. This one had pumpkin pie, cranberry, wild herb stuffing, turkey and gravy, and — sigh — Brussels sprout flavors. I never got around (or got up the courage) to sample them and they have languished in my bachelor refrigerator ever since. The reason for the “bachelor” qualifier is that it stays fairly empty and I generally keep a pretty good mental inventory of its contents. Imagine my surprise when I noticed a strange, light green liquid on the bottom.

It smelled strange, but I figured it was coming from a plastic container of a friend’s homemade pickles. I removed the pickles from the equation and cleaned up the liquid. Later on, I noticed that more liquid had collected. The inside bulb is burned out (remember, bachelor fridge), so it’s a bit dark in there. However, I eventually spied a shard of clear glass.

Jones brussels sprout soda bottle, broken

So it seems that the Brussels sprout soda had frozen and expanded until the glass bottle could no longer contain the block. That explains the sound of crashing glass I recall hearing the night before, emanating from the general vicinity of the kitchen.

The bottle was in pretty bad shape but much of the soda was still frozen inside what remained of it. It must have been gradually melting, which explains why more fluid appeared sometime after the initial cleanup. I immediately, but gingerly, removed the other 4 bottles for fear that they might be ready to burst as well. But none of the 4 showed any sign of freezing.

What to conclude? Either the Brussels sprout soda has a significantly higher freezing point than the other 4 flavors and was adversely affected by a freak temperature drop, or the Brussels sprout soda was simply situated in a colder section of the refrigerator. It’s also possible that all of them were affected by the temperature event but the others didn’t make it to the breaking point before the event reversed.

I can theorize about it all day. But in the end, I need to clean it up. How does this pertain to multimedia hacking? Well, I was going to add long-overdue test cases to FATE tonight, but that may have to wait. Fortunately, I was at the end of a shopping cycle and all I had to toss were some soda-saturated bananas. I’m keeping the butter since I don’t think it was affected, much. To any coworkers reading: if my cookies taste vaguely of Brussels sprouts over the next month, then, well… I happen to know that’s the closest some of you will come to consuming a vegetable all month.

And I never got to taste the Brussels sprout soda. Actually, that’s the part about this episode that bothers me the least.

Posted in General | 4 Comments »

OpenCL On The Horizon

August 25th, 2009 by Multimedia Mike

Mac OS X 10.6, a.k.a. Snow Leopard, is slated for release at the end of this week. One of the most interesting features I have read about is support for OpenCL, a framework for parallel computation across CPUs, GPUs, and other processors.

So how about it? What kind of possibilities does this hold for something like FFmpeg? The pedagogical example in the Wikipedia article demonstrates partitioning a fast Fourier transform so that it can be handled as separate work units, possibly by separate CPUs. I doubt that it would make a (positive) difference to, e.g., split up all of the inverse transforms during a video frame decode.
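To see why, consider dispatch granularity. The following Python sketch (not OpenCL, and with invented timing figures, so treat it purely as an illustration) models the trade-off: if each work unit carries less compute than its dispatch overhead, finer partitioning makes the job slower.

```python
# Toy granularity model: is it worth dispatching each 8x8 inverse
# transform as its own work unit?  Both timing constants below are
# invented for illustration, not measured on any real device.

IDCT_US = 1.0        # assumed time for one 8x8 inverse transform, in us
DISPATCH_US = 10.0   # assumed per-work-unit dispatch overhead, in us

def total_time_us(num_blocks, blocks_per_unit):
    """Total compute + dispatch time when num_blocks transforms are
    split into work units of blocks_per_unit transforms each."""
    units = -(-num_blocks // blocks_per_unit)  # ceiling division
    return num_blocks * IDCT_US + units * DISPATCH_US

# A 720x480 frame contains (720/8) * (480/8) = 5400 8x8 blocks.
blocks = (720 // 8) * (480 // 8)

print(total_time_us(blocks, 1))       # one transform per work unit
print(total_time_us(blocks, blocks))  # whole frame as a single unit
```

With these made-up numbers, per-block dispatch costs an order of magnitude more than handling the frame as one unit; the real question for OpenCL is where that crossover sits on actual hardware.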

I really can’t judge the spec by the one example. Perhaps I should, at the very least, read the overview slides available here.

Sometimes I think that it doesn’t help my development as a programmer and computer scientist that I view every single technological development that comes down the pike through the fairly narrow lens of multimedia hacking.

Posted in Programming | 2 Comments »

Video Ads In Magazines

August 21st, 2009 by Multimedia Mike

I am greatly anticipating learning more about how this technology works: Video appears in paper magazines. Copies of Entertainment Weekly (a U.S. entertainment magazine) will have small, presumably flexible screens that are supposed to be able to store 40 minutes of video. The magazines are slated to go on sale in Los Angeles and New York next month. With any luck, San Francisco (which I am near) may see a few as well.

Americhip Demo

The BBC article reports that the underlying chip technology is supposed to be similar to the stuff found in singing greeting cards. That sounds like an oversimplification. But the article also names the tech supplier– Americhip, the self-proclaimed leader in multisensory marketing. They have a YouTube channel with demos of this and related technology.

Posted in Multimedia PressWatch | 1 Comment »

FATE Opportunities In The Cloud

August 20th, 2009 by Multimedia Mike

Just a few months ago, I couldn’t see any value in this fad known as cloud computing. But then, it was in the comments for that blog post that Tomer Gabel introduced me to the notion of using cloud computing for, well, computing. Prior to that, I had only heard of cloud computing as it pertained to data storage. A prime example of cloud computing resources is Amazon’s Elastic Compute Cloud (EC2).

After reading up on EC2, my first thought vis-à-vis FATE was to migrate my 32- and 64-bit x86 Linux compilation duties to a beefy instance in the cloud. However, the smallest instance currently costs $73/month to leave running continuously. That doesn’t seem cost effective.

My next idea was to spin up an instance for each 32- or 64-bit x86 Linux build on demand when there was new FFmpeg code in need of compilation. That would mean 20 separate instances right now, instances that each wouldn’t have to run very long. This still doesn’t seem like a very good idea since instance computing time is billed by the hour and it’s rounded up. Thus, even bringing up an instance for 8 minutes of build/test time incurs a full billable hour. I’m unclear on whether bring-up/tear-down cycles of, say, 1 minute each, are each billed as separate compute hours, but it still doesn’t sound like a worthwhile solution no matter how I slice it.
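The round-up penalty is easy to quantify. A sketch, assuming the roughly $0.10/hour small-instance rate implied by the $73/month figure above (an assumption on my part; actual EC2 billing has more variables):

```python
import math

HOURLY_RATE = 0.10  # assumed small-instance rate, USD per instance-hour

def billed_cost(runtime_minutes, instances=1):
    """Cost when each instance is billed by the hour, rounded up."""
    billed_hours = math.ceil(runtime_minutes / 60)
    return instances * billed_hours * HOURLY_RATE

# 20 instances, each alive for only an 8-minute build/test run:
# every one of them still incurs a full billable hour.
print(billed_cost(8, instances=20))
```

So the parallel-instance scheme would run about $2.00 per commit under this assumed rate, with 52 of every 60 billed minutes wasted.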

A different architecture occurred to me recently: For each new code revision, spin up a new, single-core compute instance and run all 20 build/test configurations serially. This would at long last guarantee that each revision is being built, at least for 32- and 64-bit x86 Linux. It’s interesting to note that I could easily quantify the financial cost of an SVN commit– if it took, say, 2-3 hours to build and test the 20 configs, that would amount to about $0.30 per code commit.
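Extending that arithmetic (same assumed $0.10/hour rate, so the numbers are only as good as that guess), the on-demand approach can also be compared against the $73/month always-on instance:

```python
HOURLY_RATE = 0.10      # assumed small-instance rate, USD/hour
HOURS_PER_COMMIT = 3    # estimated serial build/test time, billed hours

def monthly_cost(commits_per_day):
    """Monthly cost of spinning up one instance per commit."""
    return commits_per_day * 30 * HOURS_PER_COMMIT * HOURLY_RATE

print(monthly_cost(1))  # light commit traffic
print(monthly_cost(8))  # heavy commit traffic
```

At one commit per day, on-demand works out to around $9/month, and break-even with the always-on instance lands near 8 commits per day.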

Those aren’t the only costs, though. There are additional costs for bandwidth, both in and out (at different rates depending on direction). Fortunately, I designed FATE to minimize bandwidth burden, so I wouldn’t be worried about that cost. Understand, though, that data on these compute instances is not persistent. If you need persistent storage, that’s a separate service called Elastic Block Store. I can imagine using this for ccache output. Pricing for this service is fascinating: Amazon charges for capacity used on a monthly basis, naturally, but also $0.10 per million I/O requests.

This is filed under “Outlandish Brainstorms” for the time being, mostly because of the uncertain and possibly intractable costs (out of my own pocket). But the whole cloud thing is worth keeping an eye on since it’s a decent wager that prices will only decline from here, making these ideas and others workable. How about a dedicated instance loaded with the mphq samples archive, iterating through it periodically and looking for crashers? Or how about filling a small compute instance’s 160 GB of space with a random corpus of multimedia samples found all over the web (I think I know people who could hook us up), then periodically processing random samples from the corpus while tracking and logging performance statistics?

Posted in Outlandish Brainstorms | 1 Comment »
