Category Archives: Codec Technology

Special QuickTime Features

I processed some more unknown samples today, the ones that came from last month’s big Picsearch score. I found some interesting QuickTime specimens. One of them was filed under the video codec FourCC ‘fire’. The sample contained only one frame of type fire; the frame was very small (238 bytes) and appeared to contain a number of small sub-atoms. Since the sample had a .mov extension, I decided to check it out in Apple’s QuickTime Player. It played fine, and you can see the result on the new fire page I made in the MultimediaWiki. Apparently, the fire effect is built into QuickTime. The file also features a single frame of RPZA video data. My guess is that the logo on display is encoded with RPZA while the fire block defines parameters for a fire animation.

Moving right along, I got to another set of QuickTime samples that were filed under the ‘gain’ video codec. This appears to be another meta-codec, and this is what it looks like in action:

Apple QuickTime Player using the gain/fade feature

I decided to post this pretty screenshot here since I didn’t feel like creating another Wiki page for what I perceive to be not a “real” video codec. The foregoing sample comes from here and actually contains 5 separate trak atoms: 2 define ‘jpeg’ data, 1 is ‘gain’, 1 is ‘dslv’, and the last is ‘text’, which defines ASCII strings containing the filenames displayed on the bottom of the slideshow. I have no idea what the dslv atom is for, but something, somewhere in the file defines whether this so-called alpha gain effect will use a cross fade (as seen with the Cumulus shapes) or an iris transition effect (as seen in the sample here).

So much about the QuickTime format remains a mystery.

Video Coding Concepts: YUV and RGB Colorspaces And Pixel Formats

If you have any experience in programming computer graphics, you probably know all about red/green/blue (RGB) video modes and pixel formats. Guess what? It is all useless now that you are working on video codec technology!

No, that’s not entirely true. Some video codecs operate on RGB video natively. A majority of modern codecs use some kind of YUV colorspace. We will get to that. Since many programmers are familiar with RGB pixel formats, let’s use that as a starting point.

RGB Colors

To review, computers generally display RGB pixels. These pixels have red (R), green (G), and blue (B) components to them. Here are the various combinations of R, G, and B components at their minimum (0) and maximum (255/0xFF) values:

R    G    B    color notes
0x00 0x00 0x00 absence of R, G, and B = full black
0x00 0x00 0xFF full blue
0x00 0xFF 0x00 full green
0x00 0xFF 0xFF full green + blue = cyan
0xFF 0x00 0x00 full red
0xFF 0x00 0xFF full red + blue = magenta
0xFF 0xFF 0x00 full red + green = yellow
0xFF 0xFF 0xFF full R, G, and B combine to make full white
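These 8-bit components are commonly packed into a single integer for storage. As a quick illustrative sketch (my own, not part of the original discussion), here is one way to pack and unpack a 24-bit RGB pixel:

```python
def pack_rgb(r, g, b):
    """Pack 8-bit R, G, B components into a 24-bit 0xRRGGBB value."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Split a 24-bit 0xRRGGBB value back into its R, G, B components."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

# pack_rgb(0xFF, 0xFF, 0xFF) -> 0xFFFFFF (full white)
# pack_rgb(0x00, 0x00, 0xFF) -> 0x0000FF (full blue)
```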

YUV Colors
If you are used to dealing with RGB colors, YUV will seem a bit unintuitive at first. What does YUV stand for? Nothing you would guess. Y stands for luminance, or brightness intensity. U and V are the chrominance (color) components: U carries the blue-difference signal and V carries the red-difference signal. U is also denoted as Cb and V is also denoted as Cr, so YUV is sometimes written as YCbCr.

Here are the various combinations of Y, U, and V components at their minimum (0) and maximum (255/0xFF) values:

Y    U    V    color notes
0x00 0x00 0x00
0x00 0x00 0xFF
0x00 0xFF 0x00
0x00 0xFF 0xFF
0xFF 0x00 0x00 full green
0xFF 0x00 0xFF
0xFF 0xFF 0x00
0xFF 0xFF 0xFF
0x00 0x80 0x80 full black
0x80 0x80 0x80 medium gray
0xFF 0x80 0x80 full white

So, all-minimum and all-maximum components do not generate intuitive (read: similar to RGB) results. In fact, all 0s in the YUV colorspace result in a dull green rather than black. That last point is useful to understand when a video displays a lot of green block errors: it probably means that the decoder is skipping blocks of data completely and leaving the underlying YUV data as all 0.
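To make the table above concrete, here is a small conversion sketch using full-range BT.601-style coefficients (one common variant; exact coefficients differ between standards, so treat the constants as illustrative):

```python
def clamp8(x):
    """Clamp a value to the 8-bit range [0, 255]."""
    return max(0, min(255, x))

def yuv_to_rgb(y, u, v):
    """Convert one full-range YUV (YCbCr) sample to RGB, BT.601-style.

    U and V are centered around 128, which is why (Y, 0x80, 0x80)
    produces a neutral gray level equal to Y.
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return tuple(clamp8(int(round(c))) for c in (r, g, b))

# (0x00, 0x80, 0x80) -> (0, 0, 0), full black
# (0xFF, 0x80, 0x80) -> (255, 255, 255), full white
# (0, 0, 0) -> R and B clamp to 0 while G stays positive: the dull green
# seen when a decoder skips blocks and leaves the YUV planes as all 0
```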


Palette Communication

If there is one meager accomplishment I think I can claim in the realm of open source multimedia, it would be as the point-man on palette support in xine, MPlayer, and FFmpeg.

Palette icon

Problem statement: Many multimedia formats — typically older formats — need to deal with color palettes alongside compressed video. There are generally three situations arising from paletted video codecs:

  1. The palette is encoded in the video codec’s data stream. This makes palette handling easy since the media player does not need to care about ferrying special data between layers. Examples: Autodesk FLIC and Westwood VQA.
  2. The palette is part of the transport container’s header data. Generally, a modular media player will need to communicate the palette from the file demuxer layer to the video decoder layer via an out-of-band/extradata channel provided by the program’s architecture. Examples: QuickTime files containing Apple Animation (RLE) or Apple Video (SMC) data.
  3. The palette is stored separately from the video data and must be transported between the demuxer and the video decoder. However, the palette could potentially change at any time during playback. This can provide a challenge if the media player is designed with the assumption that a palette would only occur at initialization. Examples: AVI files containing paletted video data (such as MS RLE) and Wing Commander III MVE.

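Whichever of the three situations applies, the decoder’s final job is the same: expand indexed pixel data through the current palette. A minimal sketch (the names are mine, not from any particular player):

```python
def apply_palette(indexed_pixels, palette):
    """Expand 8-bit palette indices into (R, G, B) pixels.

    indexed_pixels: bytes, one palette index per pixel
    palette: list of 256 (r, g, b) tuples, delivered by the demuxer
             (e.g. via an out-of-band/extradata channel, as in
             situation 2, or mid-stream, as in situation 3)
    """
    return [palette[i] for i in indexed_pixels]
```

The thread-safety hazard described below follows directly from this lookup: if the palette list is swapped while frames referencing the old palette are still queued, those frames render with the wrong colors.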
Transporting the palette from the demuxer layer to the decoder layer is only part of the battle. In some applications, such as FFmpeg, the palette data also needs to travel from the decoder layer to the video output layer, the part that creates a final video frame to be displayed or converted. This used to cause a problem for the multithreaded ffplay component of FFmpeg. The original mechanism (which I put into place) was not thread-safe: palette changes ended up occurring sooner than they were supposed to. The primary ffmpeg command line conversion tool is single-threaded, so it does not have the same problem. xine is multithreaded but does not suffer from the ffplay problem because all data sent from the video decoder layer to the video output layer must be in a YUV format; thus, paletted images are converted before leaving the decoder layer. I’m not sure about MPlayer these days, but when I implemented a paletted format (FLIC), I rendered the data at higher bit depths in the decoder layer. I would be interested to know whether MPlayer’s video output layer can handle palettes directly these days.

I hope this has been educational from a practical multimedia hacking perspective.

VQ Case Study: Textures

Per my understanding, a lot of 3D hardware operates by allowing the programmer to specify a set of vertices between which the graphics chip draws lines. Then, the programmer can specify that a bitmap needs to be plotted between some of those lines. In 3D graphics parlance, those bitmaps are called textures. More textures make a game prettier, but a graphics card only has so much memory for storing these textures. In order to stretch the video RAM budget, some graphics cards allow for compressing textures using vector quantization.

A specific example of VQ in 3D graphics hardware is the Sega Dreamcast with its PowerVR2 graphics hardware. Textures can be specified in a number of pixel formats including, but not limited to, RGB555, RGB565, and VQ. In the VQ mode, a 256-entry vector codebook is initialized somewhere in video RAM. Each vector is 8 bytes large and specifies a 2×2 block of pixels in either RGB555 or RGB565 (can’t remember which, or it might be configurable). For a texture in video RAM that is specified as VQ, each byte is actually an index into the codebook. Instant 8:1 compression, notwithstanding the 2048-byte codebook overhead, which can be negligible depending on how many textures leverage the codebook and how large those textures are.
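Decoding such a VQ texture amounts to one table lookup per index byte. Here is a simplified sketch assuming a plain linear block layout and a codebook of 2×2 pixel vectors; the real PowerVR2 stores textures in a twiddled (Morton) order, which this deliberately ignores:

```python
def decode_vq_texture(codebook, indices, width, height):
    """Decode a VQ-compressed texture into a 2D array of pixel values.

    codebook: 256 entries, each a tuple of 4 16-bit pixel values
              laid out as a 2x2 block in row-major order
    indices:  one byte per 2x2 block, in block-raster order
              (assumed linear here, unlike real PowerVR2 twiddling)
    """
    assert width % 2 == 0 and height % 2 == 0
    pixels = [[0] * width for _ in range(height)]
    blocks_per_row = width // 2
    for i, idx in enumerate(indices):
        bx = (i % blocks_per_row) * 2   # top-left x of this 2x2 block
        by = (i // blocks_per_row) * 2  # top-left y of this 2x2 block
        p = codebook[idx]
        pixels[by][bx], pixels[by][bx + 1] = p[0], p[1]
        pixels[by + 1][bx], pixels[by + 1][bx + 1] = p[2], p[3]
    return pixels
```

Each index byte stands in for 8 bytes of pixel data (four 16-bit pixels), which is where the 8:1 figure comes from once the fixed 2048-byte codebook is amortized.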