Category Archives: Video Codecs

Articles about video codec technology, from a programming-oriented perspective

Special QuickTime Features

I processed some more unknown samples today, the ones that came from last month’s big Picsearch score, and found some interesting QuickTime specimens. One of them was filed under the video codec FourCC ‘fire’. The sample contained only one frame of type fire, and that frame was very small (238 bytes) and appeared to contain a number of small sub-atoms. Since the sample had a .mov extension, I decided to check it out in Apple’s QuickTime Player. It played fine, and you can see the result on the new fire page I made in the MultimediaWiki. Apparently, support for the codec is built into QuickTime. The file also features a single frame of RPZA video data. My guess is that the logo on display is encoded with RPZA while the fire block defines parameters for a fire animation.

Moving right along, I got to another set of QuickTime samples, these filed under the ‘gain’ video codec. This appears to be another meta-codec, and this is what it looks like in action:


[Screenshot: Apple QuickTime Player using the gain/fade feature]

I decided to post this pretty screenshot here since I didn’t feel like creating another Wiki page for what I perceive to be not a “real” video codec. The foregoing CumulusQuickTimeSlideshow.mov sample comes from here and actually contains 5 separate trak atoms: 2 define ‘jpeg’ data, 1 is ‘gain’, 1 is ‘dslv’, and the last is ‘text’, which defines ASCII strings containing the filenames shown at the bottom of the slideshow. I have no idea what the dslv atom is for, but something, somewhere in the file determines whether this so-called alpha gain effect uses a cross fade (as seen with the Cumulus shapes) or an iris transition effect (as seen in the sample na_visit03.mov here).

So much about the QuickTime format remains a mystery.

Video Coding Concepts: YUV and RGB Colorspaces and Pixel Formats

If you have any experience in programming computer graphics, you probably know all about red/green/blue (RGB) video modes and pixel formats. Guess what? It is all useless now that you are working on video codec technology!

No, that’s not entirely true. Some video codecs operate on RGB video natively. A majority of modern codecs use some kind of YUV colorspace. We will get to that. Since many programmers are familiar with RGB pixel formats, let’s use that as a starting point.

RGB Colors

To review, computers generally display RGB pixels. These pixels have red (R), green (G), and blue (B) components to them. Here are the various combinations of R, G, and B components at their minimum (0) and maximum (255/0xFF) values:

R     G     B     color notes
0x00  0x00  0x00  absence of R, G, and B = full black
0x00  0x00  0xFF  full blue
0x00  0xFF  0x00  full green
0x00  0xFF  0xFF  green + blue = full cyan
0xFF  0x00  0x00  full red
0xFF  0x00  0xFF  red + blue = full magenta
0xFF  0xFF  0x00  red + green = full yellow
0xFF  0xFF  0xFF  full R, G, and B combine to make full white
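
If it helps to see this in code, here is a minimal C sketch of one common packed-RGB layout. The 0x00RRGGBB byte ordering is just one convention among many; real pixel formats vary, so treat this as illustrative:

#include <stdint.h>

/* Pack 8-bit R, G, and B components into a 0x00RRGGBB word, one
 * common in-memory layout for 24-bit RGB pixels. */
static uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

/* Recover the individual components from a packed pixel. */
static void unpack_rgb(uint32_t pixel, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (pixel >> 16) & 0xFF;
    *g = (pixel >>  8) & 0xFF;
    *b = pixel & 0xFF;
}

With this layout, full white from the table above is simply 0x00FFFFFF and full blue is 0x000000FF.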

YUV Colors
If you are used to dealing with RGB colors, YUV will seem a bit unintuitive at first. What does YUV stand for? Nothing you would guess. Y stands for luminance, or intensity. U and V are the chrominance (color difference) components: U carries the blue difference and V carries the red difference. U is also denoted Cb and V is also denoted Cr, so YUV is sometimes written as YCbCr.

Here are the various combinations of Y, U, and V components at their minimum (0) and maximum (255/0xFF) values:

Y     U/Cb  V/Cr  color notes
0x00  0x00  0x00  dull green (see below)
0x00  0x00  0xFF
0x00  0xFF  0x00
0x00  0xFF  0xFF
0xFF  0x00  0x00  full green
0xFF  0x00  0xFF
0xFF  0xFF  0x00
0xFF  0xFF  0xFF
0x00  0x80  0x80  full black
0x80  0x80  0x80  medium gray
0xFF  0x80  0x80  full white

So, all-minimum and all-maximum components do not generate intuitive (read: similar to RGB) results. In fact, all 0s in the YUV colorspace decode to a dull green rather than black. That last point is useful to understand when a video is displaying a lot of green block errors: it probably means that the decoder is skipping blocks of data completely and leaving the underlying YUV data as all 0.
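
To see why, here is a minimal C sketch of a full-range YUV -> RGB conversion. The coefficients are the commonly cited BT.601 approximations; exact values vary by standard, so consider this a sketch rather than a reference implementation:

#include <stdint.h>
#include <stdio.h>

/* Clamp an intermediate value into the displayable 0..255 range. */
static int clamp(double x)
{
    return x < 0 ? 0 : (x > 255 ? 255 : (int)x);
}

/* Full-range YUV -> RGB using common BT.601 coefficients. */
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    double d = u - 128.0;  /* blue-difference chroma, centered on 0x80 */
    double e = v - 128.0;  /* red-difference chroma, centered on 0x80  */
    *r = clamp(y + 1.402 * e);
    *g = clamp(y - 0.344 * d - 0.714 * e);
    *b = clamp(y + 1.772 * d);
}

int main(void)
{
    uint8_t r, g, b;
    yuv_to_rgb(0x00, 0x00, 0x00, &r, &g, &b);
    printf("YUV(0,0,0) -> RGB(%d,%d,%d)\n", r, g, b);
    return 0;
}

Feeding in Y = U = V = 0 yields roughly RGB(0, 135, 0): the red and blue terms clamp to 0 while the chroma offsets push green up, which is exactly the dull green described above.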

VQ Case Study: Textures

Per my understanding, a lot of 3D hardware operates by allowing the programmer to specify a set of vertices between which the graphics chip draws lines. Then, the programmer can specify that a bitmap needs to be plotted between some of those lines. In 3D graphics parlance, those bitmaps are called textures. More textures make a game prettier, but a graphics card only has so much memory for storing these textures. In order to stretch the video RAM budget, some graphics cards allow for compressing textures using vector quantization.

A specific example of VQ in 3D graphics hardware is the Sega Dreamcast with its PowerVR2 graphics hardware. Textures can be specified in a number of pixel formats including, but not limited to, RGB555, RGB565, and VQ. In the VQ mode, a 256-entry vector codebook is initialized somewhere in video RAM. Each vector is 8 bytes and specifies a 2×2 block of pixels in either RGB555 or RGB565 (can’t remember which, or it might be configurable). For a texture in video RAM that is specified as VQ, each byte is actually an index into the codebook. Instant 8:1 compression, notwithstanding the 2048-byte codebook overhead, which can be negligible depending on how many textures share the codebook and how large those textures are.
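
As a rough illustration (not actual Dreamcast code: the real hardware stores textures in a swizzled order, and the pixel arrangement within a codebook entry here is my assumption), decoding such a texture might look like this in C:

#include <stdint.h>

/* Decode a VQ texture: each input byte selects one of 256 codebook
 * entries, and each entry is a 2x2 block of 16-bit (e.g. RGB565)
 * pixels. This sketch writes the blocks out in simple raster order. */
static void decode_vq_texture(const uint16_t codebook[256][4],
                              const uint8_t *indices,
                              uint16_t *output, int width, int height)
{
    for (int by = 0; by < height / 2; by++) {
        for (int bx = 0; bx < width / 2; bx++) {
            const uint16_t *block = codebook[*indices++];
            uint16_t *dst = output + (by * 2) * width + (bx * 2);
            dst[0]         = block[0];  /* top-left pixel     */
            dst[1]         = block[1];  /* top-right pixel    */
            dst[width]     = block[2];  /* bottom-left pixel  */
            dst[width + 1] = block[3];  /* bottom-right pixel */
        }
    }
}

Each byte of input expands to 8 bytes of pixel data, which is where the 8:1 figure comes from.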

VQ Case Study: RoQ

RoQ was first developed for the FMV-based adventure game The 11th Hour and was later adopted by id Software for the Quake III engine and derivative games.

RoQ operates in a YUV 4:2:0 colorspace, even though it was developed for a game released in the late-1994/early-1995 timeframe. Back then, cutting-edge video was 640×480 at 256 colors, or maybe 64K colors, and it was not feasible to take a large video frame and convert the entire thing from YUV to RGB 30, 24, or even 15 times per second. RoQ’s design solved some of these problems.
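
To see why 4:2:0 lightens the load, consider how little chroma data is actually stored. A quick sketch of the buffer math for a typical planar 4:2:0 arrangement (not necessarily RoQ’s exact internal layout):

#include <stddef.h>

/* In planar YUV 4:2:0, every pixel has its own Y sample, but U and V
 * are each subsampled by 2 horizontally and vertically, so one U and
 * one V sample cover a 2x2 block of pixels. */
static size_t yuv420_frame_size(size_t width, size_t height)
{
    size_t luma   = width * height;  /* one Y sample per pixel      */
    size_t chroma = luma / 4;        /* one sample per 2x2 pixels   */
    return luma + 2 * chroma;        /* total = 1.5 bytes per pixel */
}

A 640×480 frame thus needs only 460,800 bytes of YUV 4:2:0 data versus 921,600 bytes for 24-bit RGB, and only a third of those samples are chroma requiring the expensive color conversion.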
