RAR Is Still A Contender

RAR (Roshal ARchive) is still a popular format in some corners of the internet. In fact, I procured a set of nearly 1500 RAR files that I want to use in a little project. But I didn’t want my program to have to operate directly on the RAR files, which meant I would need to recompress them to another format. Surely one of the lossless compressors commonplace on Linux these days would perform better. Probably not gzip. Maybe not bzip2 either. Perhaps xz, though?

Conclusion
At first, I concluded that xz beat RAR on every single file in the corpus. But then I studied the comparison again and realized it wasn’t quite apples to apples. So I designed a new experiment.

New conclusion: RAR still beats xz on every sample in this corpus (for the record, the data could be described as executable program data mixed with reduced quality PCM audio samples).

Methodology
My experiment first reprocessed each archive into a new resource archive file format, then compressed that single file (rather than a set of files) using gzip, bzip2, xz, and rar at their maximum compression settings.

echo filesize,gzip,bzip2,xz,rar,filename > compressed-sizes.csv
for f in /path/to/files/*
do
  gzip -9 --stdout "$f" > out.gz
  bzip2 -9 --stdout "$f" > out.bz2
  xz -9 --stdout --check=crc32 "$f" > out.xz
  rar a -m5 out.rar "$f"
  # the column order here must match the CSV header above:
  # original size, then gzip, bzip2, xz, rar
  stat --printf "%s," "$f" out.gz out.bz2 out.xz out.rar >> compressed-sizes.csv
  echo "$f" >> compressed-sizes.csv
  rm -f out.gz out.bz2 out.xz out.rar
done

Note that xz gets the option '--check=crc32' since I’m using the XZ Embedded library, which requires it. It really doesn’t make a huge difference in file size.
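
That claim is easy to verify. Here is a minimal sketch that compares all four check types xz supports on a single sample (the file name is hypothetical):

for check in none crc32 crc64 sha256
do
  # each check type only adds a small per-block footer to the .xz stream
  xz -9 --check=$check --stdout sample.bin | wc -c
done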

Experimental Results
The preceding script generates compressed-sizes.csv, which goes into a Google Spreadsheet (export as CSV).
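
If you’d rather skim the numbers without the spreadsheet, a quick sketch along these lines (assuming the column order in the CSV header above) lists each file with its xz-to-rar size ratio, smallest first:

# column 4 is the xz size, column 5 is the rar size, column 6 is the filename
awk -F, 'NR > 1 { printf "%.4f %s\n", $4 / $5, $6 }' compressed-sizes.csv | sort -n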

Here are the full results of the bake-off, graphed:

[graph: compressed output sizes for gzip, bzip2, xz, and rar across the full corpus]

That’s not especially useful. Here are the top 2 contenders compared directly:

[graph: xz vs. rar compressed sizes, compared directly]

Action
Obviously, I’m unmoved by the data. There is no way I’m leaving these files in their RAR form for this project, marginal space and bandwidth savings be darned. There are other trade-offs in play here. I know there is free source code available for decompressing RAR files, but its license wouldn’t mesh well with the GPL source code libraries that form the core of this project. Plus, the XZ Embedded code is already integrated and painstakingly debugged.

During this little exercise, I learned of a site called Maximum Compression, which takes experiments like the foregoing to their logical conclusion by comparing over 200 compression programs on a standard data corpus. According to the site’s summary page, a program called PAQ8PX posts the best overall scores.

9 thoughts on “RAR Is Still A Contender”

  1. Kostya

    1) RAR can employ two different compression methods: an LZ77 derivative (with optional multimedia preprocessing, too!) and a PPM variant. XZ and the like are just LZMA (LZ77-derived with arithmetic coding).

    2) I’d also pick .xz for another reason – as I remember, its decompression speed is much higher. Disk space is cheap nowadays anyway.

    I’d actually compare all this to lossless audio codecs :)
    Everybody writes their own compression scheme based on a limited set of known methods, complete with their own container (archive) format. And we have things like FLAC (slow compression, fast decompression – like deflate or LZMA) and things like APE (slow compression, slow decompression – like most modeling methods, as in PAQ).

  2. kurosu

    From the spreadsheet, the data still seems to be compressible, so it may not be a matter of better handling of the incompressible parts, as could be the case with LZ77 vs. LZMA2.

    On the contrary, some files have their size divided by more than 10 after compression, which makes them very compressible. In such situations (mostly with text files), I have observed that the PPMd algorithm in 7zip beats LZMA.

    Also, 7zip includes a BCJ transform that improves executable compression. I don’t know if it is included in the LZMA SDK (provided by 7zip, and on which xz relies), but I would expect 7zip to perform better. I would agree, however, that it is not as ubiquitous, although most Linux distros provide an implementation.

    But at least that would explain some of xz’s shortcomings relative to rar (and 7zip).

  3. Jim Leonard

    Rar supports a “solid” mode which treats everything as one large stream to maximize dictionary hits, and sorts by content. Try running your tests again with -s and see how much that improves things.

    The Windows version has content filtering (i.e. PCM and WAV files are compressed by their deltas rather than absolute sample values), so you might want to try a further test with that. I don’t know if the Unix variants have that.

    I still use rar for my personal archiving because it tacks on parity data that can be used to reconstruct damaged bits. I know that I can use par2 with 7zip archives but that’s an extra step that I don’t want to be bothered with.

    7-zip/xz can DRASTICALLY outperform rar if no content-type filtering is required and you bump up the dictionary size. I’ve seen real improvements going from a 64M dictionary to a 1G dictionary (although this requires 10G of RAM during compression, and also requires that your audience has a minimum of 1G of RAM available for decompression). Rar is limited to a 4M dictionary for backward compatibility reasons, something I wish they would address.
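
Both of those suggestions are straightforward to try from a shell. This is only a sketch (paths and file names hypothetical; per the rar and xz man pages, -s builds a solid archive and plain xz -9 implies a 64 MiB dictionary), not something I’ve benchmarked:

# solid archive: all files share one compression stream
rar a -m5 -s solid-test.rar /path/to/files/*
# much larger dictionary; compression needs roughly 10x this much RAM
xz --lzma2=preset=9e,dict=1536MiB --stdout big-input.bin | wc -c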
