Actually, I need 80 optimal Huffman trees, but let’s take it one step at a time.
The VP3 video codec — the basis of Theora — employs 80 different Huffman trees. There are 16 for DC coefficients and 16 each for 4 different AC coefficient groups. An individual VP3/Theora frame gets to select 4 different Huffman trees: one for Y-plane DC, one for C-plane DC, one for Y-plane AC, and one for C-plane AC. VP3 hardcodes these tables. Theora allows more flexibility: an encoder is free to either use the default VP3 trees or create its own set and encode it into the stream headers (typically carried in an Ogg file).
Generating an optimal Huffman tree for a particular set of input is rather well established; any introduction to Huffman codes covers that much. What I’m curious about is how one would go about creating a set of, e.g., 16 optimal Huffman trees for a given input. The first solution that comes to mind is to treat this as a vector quantization (VQ) problem. I have no idea if this idea holds water, or if it even has any sane basis in mathematics, but when has that ever stopped me from running with a brainstorm?
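For reference, here’s a minimal sketch of that single-table construction in Python. The huffman_code_lengths helper is purely illustrative: it only computes code lengths from a frequency table, which is all that’s needed to count bits, and it has nothing to do with how Theora actually stores its trees.

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Build an optimal prefix code for a symbol -> frequency map and
    return a symbol -> code-length map. Code lengths are all we need
    to count bits; actual bit patterns can be assigned canonically."""
    if len(freqs) == 1:                      # degenerate single-symbol case
        return {next(iter(freqs)): 1}
    # Each heap entry: (subtree frequency, tie-breaker, symbols in subtree)
    heap = [(f, i, [sym]) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = dict.fromkeys(freqs, 0)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for sym in syms1 + syms2:            # every merge adds one bit
            lengths[sym] += 1
        heapq.heappush(heap, (f1 + f2, tie, syms1 + syms2))
        tie += 1
    return lengths

# Bit cost of coding a token stream with a tree trained on that same stream
tokens = "AAAABBBCCD"
freqs = Counter(tokens)
lengths = huffman_code_lengths(freqs)
total_bits = sum(freqs[s] * lengths[s] for s in freqs)
print(total_bits)  # 19 bits for this toy example
```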
Here’s the pitch:
- Modify FFmpeg’s VP3/Theora decoder to print, after each frame is decoded, the count of each token type decoded from the stream (for each of the 5 coefficient groups, and for each of the plane types), as well as the number of bits each token was encoded with. This will allow tallying the actual number of bits used for encoding tokens in each frame.
- Create a separate tool to process the data by applying a basic VQ codebook training algorithm. It will be necessary to treat all of the Y-plane AC tokens as single vectors, and likewise for the C-plane AC tokens, even though each AC token vector needs to be composed of 4 separate AC group vectors. Re-use some existing E/LBG code for this step.
- Generate Huffman trees from the resulting vectors and count the number of bits per token for each.
- Iterate through the frequency vectors captured from the first step and match each one to the codebook using a standard distance metric.
- Tally the bits used with the new vectors and see if there is any improvement versus the default vectors (Huffman tables). Rough sketches of what steps 2 through 5 might look like follow this list.
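Here is roughly what the second step might look like, assuming each frame’s token counts have already been dumped as one flat frequency vector. The plain k-means-style refinement loop is only a stand-in for proper E/LBG training, and train_codebook and sq_dist are hypothetical helpers, not functions from any existing codebase.

```python
import random

def sq_dist(a, b):
    """Squared Euclidean distance between two frequency vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_codebook(vectors, num_codewords=16, iterations=20):
    """Train a codebook of frequency vectors with a plain k-means-style
    loop: assign each vector to its nearest codeword, then move each
    codeword to the centroid of its cluster."""
    codebook = [list(v) for v in random.sample(vectors, num_codewords)]
    for _ in range(iterations):
        clusters = [[] for _ in range(num_codewords)]
        for v in vectors:
            idx = min(range(num_codewords),
                      key=lambda i: sq_dist(v, codebook[i]))
            clusters[idx].append(v)
        for i, members in enumerate(clusters):
            if members:                  # empty cells keep their old codeword
                codebook[i] = [sum(col) / len(members)
                               for col in zip(*members)]
    return codebook
```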
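And a similarly rough sketch of steps 3 through 5: build one Huffman table per codeword (re-using the hypothetical huffman_code_lengths helper from the earlier sketch), match each frame’s frequency vector to its nearest codeword, and tally the bit cost. Zero counts are clamped to a tiny value so that every token still receives a code, since a real table has to cover the whole alphabet.

```python
def tally_bits(frame_vectors, codebook, symbols):
    """Total bit cost of coding every frame's tokens when each frame
    uses the Huffman table built from its nearest codeword."""
    # One set of code lengths per codeword (one candidate Huffman tree each)
    tables = []
    for cw in codebook:
        freqs = {sym: max(f, 1e-9) for sym, f in zip(symbols, cw)}
        tables.append(huffman_code_lengths(freqs))

    total = 0
    for v in frame_vectors:
        idx = min(range(len(codebook)),
                  key=lambda i: sq_dist(v, codebook[i]))
        lengths = tables[idx]
        total += sum(count * lengths[sym] for sym, count in zip(symbols, v))
    return total
```

Comparing that total against the per-frame bit counts logged in the first step would answer the question directly.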
I don’t know if I’ll have time to get around to trying this experiment in the near future but I wanted to throw it out there anyway. With all of the improvements that the new Theora encoder brings to the table, it seems that the custom Huffman trees feature is one that is left unexercised, per my reading of the documentation and source code. From studying the Big Buck Bunny Theora encodes (my standard Theora test vectors these days), I see that they use the default VP3 tables. The 1080p variant occupied 866 MB. Could there be any notable space savings from generating custom Huffman tables? Or was this a pointless feature to add to the Theora spec?