If there is one meager accomplishment I think I can claim in the realm of open source multimedia, it would be as the point-man on palette support in xine, MPlayer, and FFmpeg.
Problem statement: Many multimedia formats — typically older formats — need to deal with color palettes alongside compressed video. There are generally three situations arising from paletted video codecs:
- The palette is encoded in the video codec’s data stream. This makes palette handling easy since the media player does not need to care about ferrying special data between layers. Examples: Autodesk FLIC and Westwood VQA.
- The palette is part of the transport container’s header data. Generally, a modular media player will need to communicate the palette from the file demuxer layer to the video decoder layer via an out-of-band/extradata channel provided by the program’s architecture; a sketch of such a channel follows this list. Examples: QuickTime files containing Apple Animation (RLE) or Apple Video (SMC) data.
- The palette is stored separately from the video data and must be transported between the demuxer and the video decoder. However, the palette can change at any time during playback, which poses a challenge if the media player was designed on the assumption that a palette only shows up at initialization. Examples: AVI files containing paletted video data (such as MS RLE) and Wing Commander III MVE.
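To make case 2 concrete, here is a minimal sketch of the kind of out-of-band channel a modular player might provide. All of the names (DecoderConfig, demuxer_publish_palette, and so on) are hypothetical, not the API of any player mentioned above:

```c
#include <stdint.h>
#include <string.h>

#define PALETTE_COUNT 256

/* Hypothetical out-of-band channel between demuxer and decoder. */
typedef struct DecoderConfig {
    uint8_t  extradata[PALETTE_COUNT * 4]; /* 256 4-byte palette entries */
    unsigned extradata_size;
} DecoderConfig;

/* Demuxer side: copy the palette out of the container header. */
static void demuxer_publish_palette(DecoderConfig *cfg,
                                    const uint8_t *header_palette)
{
    memcpy(cfg->extradata, header_palette, PALETTE_COUNT * 4);
    cfg->extradata_size = PALETTE_COUNT * 4;
}

/* Decoder side: pick the palette up at init time. */
static void decoder_init_palette(uint32_t palette[PALETTE_COUNT],
                                 const DecoderConfig *cfg)
{
    if (cfg->extradata_size >= PALETTE_COUNT * 4)
        memcpy(palette, cfg->extradata, PALETTE_COUNT * 4);
}
```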
Transporting the palette from the demuxer layer to the decoder layer is only part of the battle. In some applications, such as FFmpeg, the palette data also needs to travel from the decoder layer to the video output layer, the part that creates the final video frame to be displayed or converted. This used to cause a problem for the multithreaded ffplay component of FFmpeg: the original mechanism (which I put into place) was not thread-safe, so palette changes ended up taking effect sooner than they were supposed to. The primary ffmpeg command line conversion tool is single-threaded, so it does not have the same problem. xine is multithreaded but does not suffer from the ffplay problem because all data sent from the video decoder layer to the video output layer must be in a YUV format; paletted images are therefore converted before leaving the decoder layer. I’m not sure about MPlayer these days, but when I implemented a paletted format (FLIC), I rendered the data at a higher bit depth in the decoder layer. I would be interested to know whether MPlayer’s video output layer can handle palettes directly these days.
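To illustrate the race: if the decoder and the video output thread share one mutable palette, the output thread can pick up a new palette before it has rendered frames that were decoded under the old one. One common fix is to snapshot the palette into each decoded frame; here is a hedged sketch (the structures are illustrative, not FFmpeg’s actual ones):

```c
#include <stdint.h>
#include <string.h>

#define PALETTE_COUNT 256

/* One decoded frame, owning its own copy of the palette so a later
 * palette change in the decoder cannot retroactively recolor it. */
typedef struct VideoFrame {
    uint8_t  *indices;                /* 8-bit palette indices */
    int       width, height;
    uint32_t  palette[PALETTE_COUNT]; /* snapshot, not a shared pointer */
} VideoFrame;

static void decoder_emit_frame(VideoFrame *frame,
                               const uint32_t current_palette[PALETTE_COUNT])
{
    /* Copy, don't reference: the output thread may render this frame
     * after the decoder has already applied the next palette change. */
    memcpy(frame->palette, current_palette, sizeof(frame->palette));
}
```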
I hope this has been educational from a practical multimedia hacking perspective.
I’ve done it three ways too:
1) A patch to the AVI demuxer to handle the palette change block (I suspect that no other OSS player supports it; see the sketch after this list).
2) Demuxing the palette along with the video frame (Smacker, for example).
3) Decoding the palette as part of the video frame, then rendering at a higher bit depth (ZMBV) or not (VB, KMVC).
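For reference, the palette change block from item 1 arrives in ‘NNpc’ stream chunks using Microsoft’s AVIPALCHANGE layout (first entry, entry count where 0 means all 256, flags word, then 4-byte palette entries). A minimal parser might look like this; the ARGB packing at the end is an assumption, since players store palettes differently:

```c
#include <stdint.h>

/* Parse an AVI 'NNpc' palette-change chunk (AVIPALCHANGE layout)
 * into a 256-entry ARGB palette. Returns 0 on success. */
static int parse_avi_palette_change(uint32_t palette[256],
                                    const uint8_t *chunk, unsigned size)
{
    if (size < 4)
        return -1;
    unsigned first = chunk[0];
    unsigned count = chunk[1] ? chunk[1] : 256;  /* 0 means 256 entries */
    /* chunk[2..3] is the wFlags field; ignored here */
    if (first + count > 256 || size < 4 + count * 4)
        return -1;
    const uint8_t *entry = chunk + 4;
    for (unsigned i = 0; i < count; i++, entry += 4) {
        unsigned r = entry[0], g = entry[1], b = entry[2];
        palette[first + i] = 0xFFu << 24 | r << 16 | g << 8 | b;
    }
    return 0;
}
```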
And MPlayer now handles palettized video (Smacker comes to mind again).
Is case #2 actually significantly different from a codec that transmits decoding tables in a header (e.g., VQ codebooks, Huffman tables)? It seems like you’d have to have a general mechanism for that case anyway.
@Sean: It might seem that way. However, consider the case of Id CIN (Quake II), where the demuxer has to send over both a palette and a bunch of Huffman codes (64 kilobytes’ worth) to the decoder.
Certainly, it’s not an impossible, or even a difficult, problem. But it’s just one more accommodation that must be made when “architecting” (or just throwing together) a generalized media player.
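One way to make that accommodation without complicating the player core is to concatenate everything into a single opaque config blob and let the decoder slice it apart at init time. This is only a sketch with made-up names, not how any of the players above actually handle Id CIN:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PALETTE_SIZE (256 * 4)   /* 256 4-byte palette entries */
#define HUFF_SIZE    (64 * 1024) /* Huffman code data, per the post */

/* Demuxer side: bundle palette + Huffman tables into one opaque blob
 * so the player core only has to ferry a single buffer. */
static uint8_t *bundle_decoder_config(const uint8_t *palette,
                                      const uint8_t *huff_tables,
                                      unsigned *out_size)
{
    uint8_t *blob = malloc(PALETTE_SIZE + HUFF_SIZE);
    if (!blob)
        return NULL;
    memcpy(blob, palette, PALETTE_SIZE);
    memcpy(blob + PALETTE_SIZE, huff_tables, HUFF_SIZE);
    *out_size = PALETTE_SIZE + HUFF_SIZE;
    return blob;
}

/* Decoder side: slice the blob back apart at init. */
static int unpack_decoder_config(const uint8_t *blob, unsigned size,
                                 const uint8_t **palette,
                                 const uint8_t **huff_tables)
{
    if (size < PALETTE_SIZE + HUFF_SIZE)
        return -1;
    *palette     = blob;
    *huff_tables = blob + PALETTE_SIZE;
    return 0;
}
```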