Breaking Eggs And Making Omelettes

Topics On Multimedia Technology and Reverse Engineering


Lagarith And MSU

April 24th, 2006 by Multimedia Mike

While raiding Wikipedia for scant pieces of multimedia information they might have that are not yet in the MultimediaWiki, I learned of two new lossless video codecs: Lagarith and MSU Lossless Video. I’ve heard people grumble about how lossless video codecs just don’t perform. I know of one grumbler in particular (you know who you are) who claims that FLAC actually outperforms the nominal special-purpose lossless video codec. I am expecting a full report from you on how these new codecs stack up. The MSU technology is proprietary, but I have started to document the GPL’d Lagarith codec in the Wiki. The author contributed a write-up of the surface details. I am trying to plod through the control flow. It’s slow going, though, since many of the crucial functions are written in MMX-optimized assembly.

In other Wiki news, I finally found a logo:

MultimediaWiki Logo

It is the same one seen on Wikipedia’s multimedia entry and apparently comes from a set of KDE icons made here. I like it.

Posted in General | 12 Comments »

12 Responses

  1. Steve Says:

    Previously I had stumbled onto the MSU Codec from here:

    Another one from the forum is one called LZOcodec (apparently because it uses LZO)

    You may also find some additional ones mentioned here:

  2. Jim Leonard Says:

    I tend to stay away from any codec that doesn’t have an official website and/or contact details in the distribution, so I haven’t tried many of the lossless codecs that have cropped up on doom9 over the years.

    I’m currently working with analog captures of digital animation from the 1990s (the next Mindcandy DVD project), and I’ve cleaned up still scenes by using the previous frame until the scene changes. I’ve also normalized so that anything within +/-5 of 0,0,0 gets clipped to 0,0,0 (same with 255,255,255). I tell you this because that’s the source material I used to test Lagarith and MSU. Here are my initial impressions, in no particular order:

    Lagarith is fantastic for my source material: It compresses in almost realtime, and encodes a null frame if the previous frame doesn’t change (perfect for my optimizations, and also for screen captures and emulator output). Took a 16GB source file (uncompressed) and got it down to 2.6GB. (HuffYUV did 5.2G.)

    MSU compressed very well due to motion prediction and encoding only the deltas between frames. Only problem is, it encoded my test material 5x realtime. But it got the file down to 2.1G.

    Playback performance in Premiere Pro (my target application):

    Raw: Plays realtime (barely — the data rate is 30MB/s)

    HuffYUV: Plays realtime.

    Lagarith: Plays slowly (non-realtime).

    MSU: Crashed Premiere Pro :-( Played slowly (non-realtime) in VirtualDub.

    So for me, the choice is clearly Lagarith.
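As an aside, the +/-5 black and white clipping Jim applied to his captures is easy to reproduce. A hypothetical NumPy sketch (his actual tooling is not described in the comment):

```python
import numpy as np

def clip_levels(frame: np.ndarray, tol: int = 5) -> np.ndarray:
    """Snap near-black pixels to (0,0,0) and near-white pixels to
    (255,255,255).

    A pixel is snapped only when all three channels sit within `tol`
    of the extreme, matching the +/-5 rule from the comment above."""
    out = frame.copy()
    near_black = (frame <= tol).all(axis=-1)         # every channel near 0
    near_white = (frame >= 255 - tol).all(axis=-1)   # every channel near 255
    out[near_black] = 0
    out[near_white] = 255
    return out
```

Snapping analog noise back to the extremes makes flat regions genuinely flat, which is exactly what gives a lossless coder (or a null-frame check) something to exploit.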

  3. Multimedia Mike Says:

    Jim: MSU encoded it 5x realtime? That’s bad? Or did you mean 1/5 realtime? 5 seconds to encode 1 second of video frames?

  4. pengvado Says:

    MSU – agreed, it’s rather slow but good compression ratio.
    Lagarith – meh, almost as slow as ffv1, and only a little better compression than ffvhuff. Sure, neither ffv1 nor ffvhuff have null frames, but that’s what -vf decimate is for.
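The null-frame trick both commenters mention is simple on the encoder side: emit an empty payload when a frame is byte-identical to its predecessor. A hypothetical sketch, with zlib standing in for a real entropy coder (nothing Lagarith-specific):

```python
import zlib
from typing import Optional

def encode_frame(prev: Optional[bytes], cur: bytes) -> bytes:
    """Return an empty payload (a 'null frame') when the frame is
    identical to the previous one; otherwise compress it.
    zlib is only a stand-in for the codec's real entropy coder."""
    if prev is not None and cur == prev:
        return b""  # the decoder simply repeats its previous output
    return zlib.compress(cur)

def decode_frame(prev: Optional[bytes], payload: bytes) -> bytes:
    """Mirror of encode_frame: an empty payload repeats the last frame."""
    if payload == b"":
        return prev
    return zlib.decompress(payload)
```

For static material such as screen captures, long runs of identical frames collapse to a byte or two of container overhead each, which is where the big ratio wins come from.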

  5. loki Says:

    dunno, but in my eyes MSU was a big loser. but that could have been because of the source video material.
    i once coded a very simple lossless video encoder, which worked especially well for encoding screen captures.
    my results then were:

    1. LLV 0.5b ——————————- 779.178 Bytes
    2. corePNG (best) ——————— 2.029.568 Bytes
    3. MSU 0.5.8b (lossless setting) —– 2.574.336 Bytes
    4. LCL 2.23 zlib (best) —————- 7.583.744 Bytes
    5. LCL 2.23 mszh ——————— 21.291.008 Bytes
    6. FFmpeg FFV1 ———————- 24.403.968 Bytes
    7. huffYUV 2.11 (best, RGB) ——- 155.256.832 Bytes

    (LLV is what i called my codec).

    plus: there is only 1 real lossless setting (at least this was the case with version 0.5.8b then), and 3 pseudo-lossless settings…

    and it was 3 times slower than my codec, which was slow as hell already :(

  6. Jim Leonard Says:

    Sorry, yes, I meant it took 5 times as long as real time to encode (5x slower).

    Loki: Is your codec available?

  7. loki Says:

    @Jim: not really. the only thing ever published was a demo EXE player with an example video. main problem is that the bitstream format is not completely frozen yet, maybe it will be integrated into some codec framework which i am working on.

    if someone wants to have a look, the demo can be found here:

  8. Alex Says:

    Just had a look at your frames.llv, and at first sight, a bzip2-compressed block begins at offset 8 :)

  9. Alex Says:

    At second sight:

    dd if=frames.llv of=frames.bz2 bs=8 skip=1
    bzip2recover frames.bz2

    This created small bzip2 files, possibly all the frames (and deltas):
    6061 2006-04-27 13:12 rec00001aaa.bz2
    48 2006-04-27 13:12 rec00002aaa.bz2
    48 2006-04-27 13:12 rec00003aaa.bz2
    216 2006-04-27 13:12 rec00004aaa.bz2
    245 2006-04-27 13:12 rec00005aaa.bz2
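Alex’s dd invocation simply drops an 8-byte header (bs=8 skip=1) and lets bzip2recover split the rest at block boundaries. The same idea could be sketched in Python, assuming only that the container has an 8-byte header and that each stream starts with the 'BZh' magic:

```python
def find_bzip2_blocks(data: bytes, header_size: int = 8) -> list:
    """Skip the container header and return the offsets of candidate
    bzip2 streams, identified by the 'BZh1'..'BZh9' magic bytes."""
    payload = data[header_size:]
    offsets = []
    pos = payload.find(b"BZh")
    while pos != -1:
        # the byte after 'BZh' is the block-size digit ('1'..'9')
        if payload[pos + 3:pos + 4].isdigit():
            offsets.append(header_size + pos)
        pos = payload.find(b"BZh", pos + 1)
    return offsets
```

Each offset found this way could then be fed to a bzip2 decompressor to recover the individual frames and deltas.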

  10. loki Says:

    lol. i already said, it is not very complex… but it still compresses very well on special input video like screen captures of an application, such as the demo video. this is quite obvious, but all the other codecs i compared on this video produced larger files.

  11. System25 Says:

    Lossless compression is very difficult and the performance is poor, but development and research in this field must continue if we want to achieve a good codec that rivals lossy ones. (Maybe some day…)

    And Loki, you cannot use bzip2 in a codec nowadays because it is too slow. (And it is too slow because of the BWT [Burrows-Wheeler Transform].)

  12. Multimedia Mike Says:

    I don’t think it’s fair to discount the BWT entirely for multimedia compression. Sure, the forward transform is much slower than the inverse transform. But look at vector quantization algorithms, which have traditionally been highly asymmetric (they can take a very long time to encode but are very quick to decode).
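The asymmetry is easy to see in a toy BWT: a naive forward transform sorts every rotation of the input, while the inverse needs only one stable sort and a table walk. An educational sketch, not how bzip2 actually implements it:

```python
def bwt_forward(s: bytes) -> tuple:
    """Naive forward BWT: sort all rotations (the slow direction)."""
    n = len(s)
    rotations = sorted(range(n), key=lambda i: s[i:] + s[:i])
    last_col = bytes(s[(i - 1) % n] for i in rotations)
    return last_col, rotations.index(0)  # index of the original string's row

def bwt_inverse(last_col: bytes, idx: int) -> bytes:
    """Inverse BWT via the standard last-to-first mapping (the fast direction)."""
    n = len(last_col)
    # A stable sort of the last column recovers the first column's order.
    order = sorted(range(n), key=lambda i: last_col[i])
    out = bytearray()
    for _ in range(n):
        idx = order[idx]
        out.append(last_col[idx])
    return bytes(out)
```

The forward pass here is O(n² log n) on the rotation sort; the inverse is a single O(n log n) sort plus a linear walk, which mirrors the encode-slow/decode-fast profile being discussed.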