Author Archives: Multimedia Mike

The Quest For Decompilation

Every now and then, someone comes along and writes a short novel of a comment on an older post. Such was the case when Sean Barrett used the occasion of my What RE Looks Like post to take three hours of his rather busy life and compose a “symbolic executor” — a basic block decompiler. It’s a valiant effort and I would like to try my hand at it, as I do with all RE tools. I am having trouble compiling the source he posted (I converted the CR line endings, but I am still stuck on missing symbols from his custom library-in-a-header-file). It works on Microsoft-compatible disassembly output but probably would not be too hard to adapt for ‘objdump -Mintel’ output from the GNU toolchain.

Many people have gone down this basic block disassembly road. The details are hazy but I seem to recall that I have made the journey as well. It’s a good thing I keep this blog as a journal. I guess the reason I can’t remember what my experiment was called is because it was the “Unnamed RE Project”. It looks like all I accomplished there was straight ASM -> C translation without any effort at higher level language abstraction.

Anyway, I still maintain that figuring out the overriding purpose of these basic blocks is not the biggest challenge in traditional binary reverse engineering– indeed, I personally consider it the most interesting part. No, what I think is the toughest part is figuring out — or, more likely, guessing — what the sometimes hundreds of referenced variables are actually used for, and assigning them appropriate names. The biggest nightmare is when functions pass around multiple gigantic nested structures and actually use a large number of the variables within them.

In other words, true understanding of the underlying algorithm is the goal. But, Sean, I still want to try your tool.

Swiss Patent Survey

Some time ago, I complained about all those survey requests that F/OSS developers receive from grad students who insist on surveying people from an academic post rather than obtaining real employment. Normally, I ignore them summarily (and then get testy when the authors send multiple notices or actively follow up to demand why I have not done my part).

However, I have recently been getting survey spam with a slightly different focus. One Marcus Dapp, a Ph.D. student somewhere in Swiss-land, is conducting an exclusive, invitation-only survey about how software patents impact free software projects. Apparently, he doesn’t read Slashdot or any of the thousands of other geek sites out there that consistently lament the topic.

Ironically, I received the survey invite due to my activity with the old TuxNES project (because it’s a Sourceforge project and it’s technically “active” — 89.76% activity last week? huh?), and not due to being on the forefront of the IP powder keg that is multimedia technology. For TuxNES and other 8-bit NES emulators, the patent situation is fairly cut and dried — the NES hardware patents expired years ago.

Multimedia Document Mirror

I started a new effort tonight: I created a mirror of multimedia-related technical documents. I found some documents on my hard drive that have no official home on the internet. Further, it is entirely possible that other documents could disappear at any time. So I am maintaining this mirror. Plus, I have tons of webspace and bandwidth to burn with my hosting plan.

The Wiki page will maintain both local mirror links and official links, where official copies still exist. The primary Wiki page for a particular subject should point to the official copy if one exists, with a note about the local mirror as well.

So if you have any orphaned documents lying around that belong on this mirror, please let me know. Things such as MPEG drafts have always been fair game; final MPEG drafts — the kinds for which currency must be exchanged — are not acceptable.

Improving qt-faststart

As this seems to be the homepage of ‘qt-faststart’ according to Google, here are some other useful links:

Moving right along…

It’s weird to think that I may have already written the most popular piece of free software that I will ever program– qt-faststart. It’s easily the most ported of all my programs– at the very least, there are native versions for Mac OS X (Cocoa wrapper) and Adobe AIR.


Classic Apple QuickTime logo

All that qt-faststart does is take an Apple QuickTime file that looks like this:

  [ 'mdat' QT data atom ]    [ 'moov' QT info atom ]

and rearranges it to look like this:

  [ 'moov' QT info atom ]    [ 'mdat' QT data atom ]

Why is this even necessary? In order to stream a QT file via HTTP, the moov atom needs to appear before the mdat atom. So why not write it that way when the file is created? Because that is tricky to do without two passes through the file. Most QT muxers only make the first pass; the qt-faststart utility performs the second.
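The rearrangement hinges on being able to walk the file’s top-level atoms, each of which starts with a big-endian 32-bit size followed by a four-character type. Here is a minimal sketch of that walk in Python — not qt-faststart’s actual C code, just an illustration of the layout:

```python
import struct

def walk_top_level_atoms(data: bytes):
    """Yield (type, offset, size) for each top-level QuickTime atom."""
    offset = 0
    while offset + 8 <= len(data):
        size, atom_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size == 1:
            # a size of 1 means a 64-bit extended size follows the type field
            size = struct.unpack(">Q", data[offset + 8:offset + 16])[0]
        if size < 8:
            break  # malformed atom; stop rather than loop forever
        yield atom_type, offset, size
        offset += size
```

Finding a trailing moov atom is then just a matter of checking whether the last atom the walk yields has type ‘moov’.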

I have long maintained that the biggest flaw with Apple’s QuickTime file format is that chunk offsets are specified as absolute file offsets. If they were specified as offsets relative to the start of their mdat chunk, the moov chunk would be easily relocatable.
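To make that concrete: the absolute offsets live in ‘stco’ chunk offset tables inside the moov atom, so relocating moov to the front means every entry in every stco table must grow by the size of the region newly inserted ahead of mdat. A sketch of the patch — illustration only; it assumes the stco payload has already been extracted, and it ignores the 64-bit ‘co64’ variant:

```python
import struct

def patch_stco(stco_payload: bytes, shift: int) -> bytes:
    """Shift every chunk offset in an 'stco' payload by `shift` bytes.

    Payload layout: 1-byte version, 3 bytes of flags, a 4-byte
    big-endian entry count, then `count` 32-bit absolute file offsets.
    """
    version_flags = stco_payload[:4]
    count = struct.unpack(">I", stco_payload[4:8])[0]
    offsets = struct.unpack(f">{count}I", stco_payload[8:8 + 4 * count])
    patched = struct.pack(f">{count}I", *(o + shift for o in offsets))
    return version_flags + stco_payload[4:8] + patched
```

With relative offsets, this whole patching step would vanish: the moov atom could move anywhere without touching a single table entry.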

Are there ways to improve qt-faststart? One thing that would probably be nifty, and perhaps considered standard in this day and age, would be to compress the moov atom. moov atom compression has been around for a long time and operates with the standard zlib algorithm. It seems straightforward enough– take the moov atom from the end of the file, patch the offsets, compress the atom, and put it at the front. But wait– the chunk offsets will all be different since the moov atom is compressed. So, how about compressing the moov atom, assessing the size of the compressed moov atom, patching all of the moov atom’s chunk offsets, and then compressing again? But… that would alter the size of the compressed atom since the data has changed. But it probably does not change much. I suspect this is why QuickTime specifies such atom types as ‘free’, ‘junk’, and ‘skip’. These are empty space atoms. Thus, the revised qt-faststart algorithm would probably look like this:

  • Find and load moov atom at end of QT file.
  • Compress moov atom to obtain rough size.
  • Declare moov atom size to be compressed size + factor n.
  • Patch moov atom’s chunk offset atoms to reflect calculated size.
  • Compress and write moov atom.
  • Write empty space atom with the difference between estimated and actual compressed sizes.
  • Copy remainder of QT file.

I wonder what factor n should be? I will probably have to determine this empirically. Or rather than a fixed number or percentage, I wonder if there should be an iterative process for converging on a somewhat optimal compressed size? Not that it matters a great deal; the size of the moov atom generally pales in comparison to the size of the actual multimedia data. It makes one ponder why the moov atom can be compressed in the first place. Alex once proposed that it may provide a layer of data integrity. According to the zlib tech notes, CRCs are used above the compression.
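A rough sketch of that iterative convergence, with some loud caveats: `patch_offsets` here is a hypothetical caller-supplied helper that shifts every chunk offset by a given amount (the stco walking shown earlier), and the output glosses over the real compressed-moov container format — a ‘moov’ wrapping ‘cmov’/‘dcom’/‘cmvd’ sub-atoms — in favor of a bare zlib payload:

```python
import struct
import zlib

def build_front_region(moov_payload: bytes, patch_offsets, slack: int = 64) -> bytes:
    """Estimate, patch, compress, and pad until the front region fits.

    `slack` plays the role of "factor n": headroom added to the size
    estimate so the recompressed, repatched moov still fits inside it.
    """
    # First pass: compress once to get a rough size (8 bytes for the header).
    estimate = len(zlib.compress(moov_payload)) + 8
    while True:
        declared = estimate + slack  # estimated size + factor n
        patched = patch_offsets(moov_payload, declared)
        compressed = zlib.compress(patched)
        actual = len(compressed) + 8
        if actual == declared:
            return struct.pack(">I4s", actual, b"moov") + compressed
        if actual + 8 <= declared:
            # absorb the leftover estimate in a 'free' empty-space atom
            pad_total = declared - actual
            free_atom = struct.pack(">I4s", pad_total, b"free") + b"\x00" * (pad_total - 8)
            return struct.pack(">I4s", actual, b"moov") + compressed + free_atom
        estimate = actual  # estimate was too low; go around again
```

With an identity patcher this settles on the first iteration; with real offset patching the compressed size wobbles slightly between rounds, which is exactly the gap the ‘free’ padding is there to absorb.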

One bit of trivia about the program: qt-faststart does not take into account compressed moov atoms to begin with. I had always considered this a TODO item. However, it has occurred to me that compressed moov atoms probably only ever occur at the beginning of a file. At the very least, I have never received a complaint about qt-faststart being unable to process compressed moov atoms at the end of a file.