Monthly Archives: October 2007

Improving qt-faststart


It’s weird to think that I may have already written the most popular piece of free software that I will ever program– qt-faststart. It’s easily the most ported of all my programs– at the very least, there are native versions for Mac OS X (Cocoa wrapper) and Adobe AIR.


Classic Apple QuickTime logo

All that qt-faststart does is take an Apple QuickTime file that looks like this:

  [ 'mdat' QT data atom ]    [ 'moov' QT info atom ]

and rearranges it to look like this:

  [ 'moov' QT info atom ]    [ 'mdat' QT data atom ]

Why is this even necessary? The moov atom holds the indexing information that a player needs before it can make sense of the mdat data. So, in order to stream a QT file via HTTP, the moov atom needs to appear before the mdat atom; otherwise the player has to download the entire file before playback can begin. So why not write it that way when the file is created? That’s tricky to do without two passes through the file, since the muxer does not know the moov atom’s contents or size until all of the media data has been written. Most QT muxers only do the first pass. The qt-faststart utility does the second pass.
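
For reference, the top level of a QT file is just a sequence of atoms, each one starting with a 32-bit big-endian size and a 4-character type code. A rough, untested sketch of the scan that decides whether a file even needs the treatment might look like this (64-bit atom sizes and error handling are glossed over; the function name is mine):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Return 1 if the 'moov' atom appears after the 'mdat' atom, i.e. the file
   * would benefit from the qt-faststart treatment. Sketch only: ignores
   * 64-bit (size == 1) atoms and does no real validation. */
  static int needs_faststart(FILE *f)
  {
      unsigned char header[8];
      int mdat_seen = 0;

      while (fread(header, 1, 8, f) == 8) {
          uint32_t size = ((uint32_t)header[0] << 24) | ((uint32_t)header[1] << 16) |
                          ((uint32_t)header[2] <<  8) |  (uint32_t)header[3];
          if (!memcmp(&header[4], "mdat", 4))
              mdat_seen = 1;
          else if (!memcmp(&header[4], "moov", 4))
              return mdat_seen;        /* moov after mdat -> needs rearranging */
          if (size < 8)
              break;                   /* 64-bit or malformed atom; bail out */
          fseek(f, (long)size - 8, SEEK_CUR);
      }
      return 0;
  }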

I have long maintained that the biggest flaw with Apple’s QuickTime file format is that chunk offsets are specified as absolute file offsets. If they were specified as offsets relative to the start of their mdat chunk, the moov chunk would be easily relocatable.
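
Concretely, the absolute offsets live in the ‘stco’ (and 64-bit ‘co64’) chunk offset tables inside the moov atom. When the moov atom is moved in front of mdat, every one of those offsets has to be bumped by the number of bytes that just got inserted ahead of mdat. Roughly, for the 32-bit case (untested; the real qt-faststart also handles co64):

  #include <stdint.h>

  /* Patch one 'stco' offset table when 'moov_size' bytes are inserted in
   * front of the mdat atom. 'table' points at the first 32-bit big-endian
   * entry, which has already been located by parsing the stco atom. */
  static void patch_stco(uint8_t *table, uint32_t entry_count, uint32_t moov_size)
  {
      for (uint32_t i = 0; i < entry_count; i++) {
          uint8_t *p = table + i * 4;
          uint32_t offset = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                            ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
          offset += moov_size;              /* mdat moved forward by moov_size */
          p[0] = (uint8_t)(offset >> 24);
          p[1] = (uint8_t)(offset >> 16);
          p[2] = (uint8_t)(offset >>  8);
          p[3] = (uint8_t)(offset);
      }
  }

If the offsets were stored relative to the start of the mdat atom instead, this loop would be unnecessary and the moov atom could be copied to the front byte-for-byte.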

Are there ways to improve qt-faststart? One thing that would probably be nifty, and perhaps considered standard in this day and age, would be to compress the moov atom. moov atom compression has been around for a long time and operates with the standard zlib algorithm. It seems straightforward enough– take the moov atom from the end of the file, patch the offsets, compress the atom, and put it at the front. But wait– the patched chunk offsets will all be wrong, since the compressed moov atom sitting at the front of the file is smaller than the uncompressed atom that the offsets were calculated for. So, how about compressing the moov atom first to learn its compressed size, then patching all of the moov atom’s chunk offsets, and then compressing again? But… recompressing would alter the size of the compressed atom, since the underlying data has changed. It probably does not change by much, though. I suspect this is why QuickTime specifies such atom types as ‘free’, ‘junk’, and ‘skip’. These are empty space atoms that can soak up the difference. Thus, the revised qt-faststart algorithm would probably look like this (sketched in code after the list):

  • Find and load moov atom at end of QT file.
  • Compress moov atom to obtain rough size.
  • Declare moov atom size to be compressed size + factor n.
  • Patch moov atom’s chunk offset atoms to reflect calculated size.
  • Compress and write moov atom.
  • Write empty space atom with the difference between estimated and actual compressed sizes.
  • Copy remainder of QT file.
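
If I remember the compressed movie layout correctly, the compressed payload lives in a moov atom that contains a ‘cmov’ atom, which in turn holds a ‘dcom’ atom (naming zlib) and a ‘cmvd’ atom (uncompressed size plus the deflated bytes). Leaving that wrapping and the actual offset patching aside, the estimate-then-pad part might look roughly like this (untested, with a guessed value for factor n and a placeholder for the patching pass):

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>
  #include <zlib.h>

  #define MOOV_SLACK 1024   /* "factor n": guessed room for post-patch size drift */

  /* Stand-in for the chunk offset patching that qt-faststart already does:
   * rewrite every stco/co64 entry as original offset + new_front_bytes. */
  static void patch_offsets(uint8_t *moov, size_t moov_size, size_t new_front_bytes)
  {
      (void)moov; (void)moov_size; (void)new_front_bytes;   /* placeholder */
  }

  /* Compress a moov atom using the estimate-then-pad scheme. On success,
   * *out/*out_size hold the deflated moov and *pad is the number of bytes
   * to cover with a trailing 'free' atom. Sketch only; atom headers and
   * the cmov/dcom/cmvd wrapping are ignored here. */
  static int compress_moov(const uint8_t *moov, size_t moov_size,
                           uint8_t **out, size_t *out_size, size_t *pad)
  {
      uLong bound      = compressBound(moov_size);
      uLongf comp_size = bound;
      uint8_t *work = malloc(moov_size);
      uint8_t *comp = malloc(bound);
      size_t declared;

      if (!work || !comp)
          goto fail;
      memcpy(work, moov, moov_size);

      /* pass 1: compress once just to learn the rough compressed size */
      if (compress2(comp, &comp_size, work, moov_size, Z_BEST_COMPRESSION) != Z_OK)
          goto fail;

      /* declare the final size now (rough size + factor n) and patch the
       * chunk offsets as if that much data will precede mdat */
      declared = (size_t)comp_size + MOOV_SLACK;
      patch_offsets(work, moov_size, declared);

      /* pass 2: compress the patched moov for real */
      comp_size = bound;
      if (compress2(comp, &comp_size, work, moov_size, Z_BEST_COMPRESSION) != Z_OK)
          goto fail;
      if ((size_t)comp_size > declared)
          goto fail;                    /* slack was too small; caller retries */

      *out      = comp;
      *out_size = (size_t)comp_size;
      *pad      = declared - (size_t)comp_size;   /* write a 'free' atom this big */
      free(work);
      return 0;

  fail:
      free(work);
      free(comp);
      return -1;
  }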

I wonder what factor n should be? I will probably have to determine this empirically. Or rather than a fixed number or percentage, I wonder if there should be an iterative process for converging on a somewhat optimal compressed size? Not that it matters a great deal; the size of the moov atom generally pales in comparison to the size of the actual multimedia data. It makes one ponder why the moov atom can be compressed in the first place. Alex once proposed that it may provide a layer of data integrity. According to the zlib tech notes, CRCs are used above the compression.
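
The iterative version would not be much code, either. Something like this (again untested, reusing the includes and the hypothetical patch_offsets() stand-in from the previous sketch; whatever is left over after convergence still gets covered by a small ‘free’ atom):

  /* Converge on a compressed moov size instead of guessing factor n:
   * declare the last measured size, re-patch a fresh copy of the original
   * moov, recompress, and stop once the result fits the declared size. */
  static int converge_moov_size(const uint8_t *moov, size_t moov_size,
                                uint8_t *work, uint8_t *comp, uLong bound,
                                size_t *declared)
  {
      *declared = 0;
      for (int round = 0; round < 8; round++) {    /* give up after a few rounds */
          uLongf comp_size = bound;
          memcpy(work, moov, moov_size);           /* start from pristine offsets */
          patch_offsets(work, moov_size, *declared);
          if (compress2(comp, &comp_size, work, moov_size, Z_BEST_COMPRESSION) != Z_OK)
              return -1;
          if ((size_t)comp_size <= *declared)
              return 0;                            /* fits: pad the difference */
          *declared = (size_t)comp_size;           /* declare new size, try again */
      }
      return -1;
  }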

One bit of trivia about the program: qt-faststart does not take into account compressed moov atoms to begin with. I had always considered this a TODO item. However, it has occurred to me that compressed moov atoms probably only ever occur at the beginning of a file. At the very least, I have never received a complaint about qt-faststart being unable to process compressed moov atoms at the end of a file.

Electronic Lossless Viking Arts

I’m technically off the ffmpeg-devel list right now. A few days ago, my ISP started having trouble delivering email between my mail server and the list server. I’m still trying to resolve the problem. Wouldn’t you know, some interesting stuff has been going down in the meantime.

Theora Superblock Traversal

Hands down, one of the hardest parts about the VP3/Theora coding method is the bizarre fragment traversal pattern. Most everything else that VP3 does is seen in other codecs. But the Hilbert pattern used for encoding and decoding coefficients creates a monster. Vector drawing program to the rescue!

Imagine a video frame with a resolution of 96×48, highly contrived for this example. The Y plane would consist of 12×6 fragments (a fragment being an 8×8 block of samples). Meanwhile, the U and V planes would each consist of 6×3 fragments. Recall that VP3 employs the notion of a superblock, which encompasses a group of 4×4 fragments. If the fragment resolution of a plane is not divisible by 4, the plane must be conceptually padded out to the nearest superblock boundary for the traversal pattern.
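
In code, the geometry works out like this (a quick sketch; FFmpeg’s actual decoder names things differently):

  #include <stdio.h>

  /* Fragments are 8x8 blocks of samples; superblocks are 4x4 fragments.
   * A plane whose fragment dimensions are not multiples of 4 is padded up
   * to whole superblocks for the traversal pattern. */
  static void plane_geometry(const char *name, int width, int height)
  {
      int frag_w = width  / 8;          /* fragments across */
      int frag_h = height / 8;          /* fragments down */
      int sb_w   = (frag_w + 3) / 4;    /* superblocks across, rounded up */
      int sb_h   = (frag_h + 3) / 4;    /* superblocks down, rounded up */

      printf("%s: %dx%d samples -> %dx%d fragments -> %dx%d superblocks\n",
             name, width, height, frag_w, frag_h, sb_w, sb_h);
  }

  int main(void)
  {
      plane_geometry("Y", 96, 48);      /* 12x6 fragments, 3x2 superblocks */
      plane_geometry("U", 48, 24);      /*  6x3 fragments, 2x1 superblocks */
      plane_geometry("V", 48, 24);
      return 0;
  }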


Theora superblock decoding pattern

The above drawing illustrates the order in which the fragments are traversed in our hypothetical 96×48 frame when decoding coefficients. Note, in particular, the strange order in which fragments 50..53 are traversed.

I hope that’s clear enough. The challenge at hand is to establish data structures at the start of the encode or decode process. Generally, you will allocate an array of fragment data structures: fragment indices 0, 1, 2, 3, etc. proceed left to right, then top to bottom. The second row of the Y plane begins at index 12, the third row at index 24, and so on. But when it comes to decoding the DCT coefficients, it is necessary to traverse the fragment data structure array at indices 0, 1, 13, 12, 24, 36, 37, 25, and so on. Thus, it is customary to build an array that maps one set of indices to the other. I’m having flashbacks just thinking about it. I remember developing a completely different algorithm than the official VP3 decoder uses when I reimplemented the decoder for FFmpeg.
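
To make that concrete, here is a rough sketch of how such a mapping table could be built. The 4×4 traversal order inside a superblock is hardcoded below as (x, y) offsets; I transcribed it from the pattern in the drawing, so double-check it against the spec or a real decoder before trusting it. For the 12×6-fragment Y plane it spits out the sequence quoted above: 0, 1, 13, 12, 24, 36, 37, 25, and so on.

  #include <stdio.h>

  /* Order in which the 16 fragments of a 4x4 superblock are visited,
   * expressed as (x, y) offsets from the superblock's top-left fragment. */
  static const int hilbert[16][2] = {
      {0,0}, {1,0}, {1,1}, {0,1},
      {0,2}, {0,3}, {1,3}, {1,2},
      {2,2}, {2,3}, {3,3}, {3,2},
      {3,1}, {2,1}, {2,0}, {3,0},
  };

  /* Build a table mapping coding order -> raster-order fragment index for a
   * plane of frag_w x frag_h fragments. Fragments that fall in the superblock
   * padding area are skipped. Returns the number of entries written. */
  static int build_coding_order(int *map, int frag_w, int frag_h)
  {
      int sb_w = (frag_w + 3) / 4;
      int sb_h = (frag_h + 3) / 4;
      int n = 0;

      for (int sb_y = 0; sb_y < sb_h; sb_y++)
          for (int sb_x = 0; sb_x < sb_w; sb_x++)
              for (int i = 0; i < 16; i++) {
                  int x = sb_x * 4 + hilbert[i][0];
                  int y = sb_y * 4 + hilbert[i][1];
                  if (x < frag_w && y < frag_h)     /* skip padding fragments */
                      map[n++] = y * frag_w + x;
              }
      return n;
  }

  int main(void)
  {
      int map[12 * 6];
      int count = build_coding_order(map, 12, 6);

      for (int i = 0; i < count; i++)    /* prints 0 1 13 12 24 36 37 25 ... */
          printf("%d ", map[i]);
      printf("\n");
      return 0;
  }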

Oh yeah, and remember that all VP3/Theora decoding is performed upside-down. I choose not to illustrate that fact in these drawings since this stuff is complicated enough.

Linus Is Still The Man

Linus Torvalds– a legendary figure who sat down one day and wrote an operating system. To many ordinary programmers like myself, he is a distant figurehead, difficult to comprehend. Every now and then, however, we catch a glimpse that helps us to humanize the mighty coder. And I don’t know about you, but I love a good knock-down, drag-out C vs. C++/Java/OOP flame war, and this thread does not disappoint: Linus tells it like it is on the topic of C++.

Perhaps I’m too harsh on C++. In fact, there is one instance where I really appreciate the use of good, solid C++ coding– when a binary target that I wish to reverse engineer was originally authored in C++, compiled, and still has the mangled C++ symbols. The GNU binutils do a fabulous job of recovering the original class and method names, as well as argument lists.

Sometimes I think I should get off my high horse with regard to C. After all, this article from May listed C programming as one of the top 10 dead or dying computer skills, right up there with COBOL and OS/2. This is not the first time that I have encountered the sentiment that C is going the way of raw assembler. I think it’s all a conspiracy perpetrated by the computer book publishing industry. The C language simply does not move anywhere near as many books as the latest flavor-of-the-month fad language.