Monthly Archives: August 2011

Further SMC Encoding Work

Sometimes, when I don’t feel like doing anything else, I look at that Apple SMC video encoder again.

8-color Encoding
When I last worked on the encoder, I couldn’t get the 8-color mode working correctly, even though the similar 2- and 4-color modes were working fine. I chalked the problem up to the extreme weirdness of the packing method unique to the 8-color mode. Remarkably, I had that logic correct the first time around. The real problem turned out to be in the 8-color cache, and it came down to the vagaries of 64-bit math in C: bit-shifting an unsigned 8-bit quantity implicitly promotes it to a signed 32-bit int first, or so I discovered.
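
Here is a minimal sketch of the pitfall (not the encoder’s actual code; the names are made up). Because of C’s integer promotion rules, the uint8_t is widened to a signed 32-bit int before the shift, so on typical platforms the value gets sign-extended when it lands in the 64-bit variable unless you cast first:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t  color = 0xC5;
        uint64_t cache;

        /* Buggy: the uint8_t is promoted to a signed 32-bit int before the
         * shift, so the shifted value overflows int and ends up
         * sign-extended when widened to 64 bits. */
        cache = color << 24;
        printf("without cast: 0x%016llX\n", (unsigned long long)cache);

        /* Fixed: widen to an unsigned 64-bit type first, then shift. */
        cache = (uint64_t)color << 24;
        printf("with cast:    0x%016llX\n", (unsigned long long)cache);

        return 0;
    }

On a typical x86 build the first printf shows 0xFFFFFFFFC5000000 while the second shows 0x00000000C5000000 — exactly the kind of corruption that quietly breaks a packed 64-bit color cache.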

Anyway, the 8-color encoding now works correctly, shaving a few more bytes off the encoded size.

Encoding Scheme Oddities
The next step is to encode runs of data. This is where I spotted some algorithmic oddities in the scheme that I had never really noticed before. There are 1-, 2-, 4-, 8-, and 16-color modes. Each mode allows encoding 1-256 consecutive blocks using that same mode. For example, the byte sequence:

  0x62 0x45

specifies that the next 3 4×4 blocks are encoded with single-color mode (in byte 0x62, the high nibble is the encoding mode and the low nibble is the block count minus 1) and the palette color to be used is 0x45. Further, opcode 0x70 works the same way except that the following byte allows more than 16 (i.e., up to 256) blocks to be encoded in the same manner. In light of this repeat functionality being built into the rendering opcodes, I’m puzzled by the existence of the repeat-block opcodes. There are opcodes to repeat the prior block up to 256 times, and there are opcodes to repeat the prior pair of blocks up to 256 times.
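
Based on that reading of the opcode layout, an encoder might emit a single-color run along these lines. This is a hypothetical sketch, not code from my encoder, and it assumes the extended form stores count-1 just like the short form does:

    #include <stddef.h>
    #include <stdint.h>

    /* Emit a run of 'count' (1-256) 4x4 blocks painted with the single
     * palette index 'color'.  Returns the number of bytes written. */
    static size_t emit_one_color_run(uint8_t *out, int count, uint8_t color)
    {
        size_t i = 0;

        if (count <= 16) {
            out[i++] = 0x60 | (count - 1);  /* high nibble = mode, low nibble = count-1 */
        } else {
            out[i++] = 0x70;                /* extended form: count byte follows */
            out[i++] = count - 1;           /* count-1, allowing up to 256 blocks */
        }
        out[i++] = color;                   /* the palette index to paint */
        return i;
    }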

So my quandary is: What would the repeat opcodes be used for? I hacked the FFmpeg / Libav SMC decoder to output a histogram of which opcodes are used. The repeat pair opcodes are never seen. However, the single-repeat opcodes are used a few times.

Puzzle Solved?
I’m glad I wrote this post. Just as I was about to hit “Publish”, I think I figured it out. I haven’t mentioned the skip opcodes yet: there are opcodes that specify that 1-256 4×4 blocks are unchanged from the previous frame. Conceivably, a block could be unchanged from the previous frame and then repeated 1-256 times from there.

That’s something I hadn’t considered up to this point for my proposed algorithm, and it will require a little more work.


Basic Video Palette Conversion

How do you take a 24-bit RGB image and convert it to an 8-bit paletted image for the purpose of compression using a codec that requires 8-bit input images? Seems simple enough and that’s what I’m tackling in this post.

Ask FFmpeg/Libav To Do It
Ideally, FFmpeg / Libav should be able to handle this automatically. Indeed, FFmpeg used to be able to, at least at the time I wrote this post about ZMBV and was unhappy with FFmpeg’s default results. Somewhere along the line, FFmpeg and Libav lost the ability to do this. I suspect it got removed during some swscale refactoring.

Still, there’s no telling if the old system would have computed palettes correctly for QuickTime files.

Distance Approach
When I started writing my SMC video encoder, I needed to convert RGB (from PNG files) to PAL8 colorspace. The path of least resistance was to match the pixels in the input image to the default 256-color palette that QuickTime assumes (and is hardcoded into FFmpeg/Libav).

How to perform the matching? Find the palette entry that is closest to a given input pixel, where “closest” is the minimum distance as computed by the usual distance formula (square root of the sum of the squares of the diffs of all the components).

  distance = sqrt((R1-R2)^2 + (G1-G2)^2 + (B1-B2)^2)

That means for each pixel in an image, check the pixel against 256 palette entries (early termination is possible if an acceptable threshold is met). As you might imagine, this can be a bit time-consuming. I wondered about a faster approach…
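
A minimal sketch of that brute-force search (hypothetical names, not my encoder’s actual code): comparing squared distances preserves the ordering, so the square root can be skipped, and a squared threshold provides the early termination mentioned above.

    #include <stdint.h>

    /* Return the index of the palette entry closest to the RGB pixel.
     * Distances are compared in squared form; 'threshold' is a squared
     * distance considered "close enough" to stop searching early. */
    static int find_nearest_entry(const uint8_t palette[256][3],
                                  uint8_t r, uint8_t g, uint8_t b,
                                  int threshold)
    {
        int best_index = 0;
        int best_dist  = 1 << 30;

        for (int i = 0; i < 256; i++) {
            int dr = r - palette[i][0];
            int dg = g - palette[i][1];
            int db = b - palette[i][2];
            int dist = dr * dr + dg * dg + db * db;

            if (dist < best_dist) {
                best_dist  = dist;
                best_index = i;
                if (best_dist <= threshold)  /* acceptable match; stop early */
                    break;
            }
        }
        return best_index;
    }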


Metal Gear Solid VP3 Easter Egg

Metal Gear Solid: The Twin Snakes for the Nintendo GameCube is very heavy on the cutscenes. Most of them are animated in real time, but there are a bunch of clips — normally of a more photo-realistic nature — that the developers needed to compress using a conventional video codec. What did they decide to use for this task? On2 VP3 (forerunner of Theora) in a custom transport format. This is only the second game I have seen in the wild that uses pure On2 VP3 (the first was a horse game). Reimar and I sorted out most of the details some time ago. I sat down today and wrote an FFmpeg / Libav demuxer for the format, mostly to prove to myself that I still could.

Things went pretty smoothly. We suspected that a certain integer field indicated the frame rate, but its value of 18 fps seemed a bit strange. I kept fixating on a header field that read 0x41F00000. Where have I seen that number before? Oh, of course — it’s the number 30.0 expressed as an IEEE 32-bit float. The 4XM format pulled the same trick.
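
For the curious, reinterpreting the raw 32-bit field as a float is straightforward. This little standalone snippet (not the demuxer code, and assuming the host uses IEEE 754 floats) shows the idea:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Reinterpret the bits of a 32-bit header field as an IEEE 754
     * single-precision float via memcpy (safe type punning). */
    static float int_bits_to_float(uint32_t bits)
    {
        float f;
        memcpy(&f, &bits, sizeof(f));
        return f;
    }

    int main(void)
    {
        printf("%f\n", int_bits_to_float(0x41F00000));  /* prints 30.000000 */
        return 0;
    }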

Hexadecimal Easter Egg
I know I finished the game years ago, but I really can’t recall any of the clips present in the samples directory. The file mgs1-60.vp3 contains a clip of a computer screen granting the player access, illustrated with a hexdump. It looks something like this:

  [screenshot: the hexdump displayed on the in-game computer screen]

Funny, there are only 22 bytes on a line when there should be 32 according to the offsets. But leave it to me to try to figure out what the file type is regardless. I squinted and copied the first 22 bytes into a file:

 1F 8B 08 00   85 E2 17 38   00 03 EC 3A   0D 78 54 D5
 38 00 03 EC   3A 0D 

And the answer to the big question:

$ file mgsfile
mgsfile: gzip compressed data, from Unix, last modified: Wed Oct 27 22:43:33 1999
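
Those first bytes also explain where file got the date: bytes 4-7 of a gzip header are a little-endian Unix modification time. A quick standalone sanity check (nothing to do with the game files themselves):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* gzip header: magic 1F 8B, method 08 (deflate), flags 00,
         * then the 4-byte little-endian modification time. */
        const uint8_t hdr[8] = { 0x1F, 0x8B, 0x08, 0x00, 0x85, 0xE2, 0x17, 0x38 };

        uint32_t mtime = hdr[4] | (hdr[5] << 8) | (hdr[6] << 16) |
                         ((uint32_t)hdr[7] << 24);
        time_t t = mtime;

        printf("mtime: %s", ctime(&t));  /* a date in late October 1999 */
        return 0;
    }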

A gzip’d file from 1999. I don’t know why I find this stuff so interesting, but I do. I guess it’s no more or less strange than writing playback systems like this.