Breaking Eggs And Making Omelettes

Topics On Multimedia Technology and Reverse Engineering


Science Into Engineering

January 20th, 2009 by Multimedia Mike

I modified my distributed RPC test staging utility to implement my imprecise audio testing idea. This is the output under typical conditions:

There was 1 unique stdout blob collected
all successful configurations agreed on this stdout blob:
pass

So, it worked. Yeah, I’m surprised too. That result means that all the configurations (20 total) produce an audio waveform in which no individual PCM sample deviates from the reference wave by more than 1. Since I had to choose some configuration to generate the reference sample, I used Linux / x86_32 / gcc 2.95.3.

BTW, this is the general Python algorithm I am using to compare the waves. It takes a full minute, give or take a second, to compare two 33 MB samples:
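(The original code listing did not survive in this copy. Based on the description here and in the comments below — unpack one signed 16-bit sample at a time with struct, take abs() of the difference, and tally deviations of exactly 1 and of more than 1, printed the way diff-audio.py prints them in comment #3 — it would have looked something like this sketch; the exact details are assumptions:)

```python
import struct
import sys

def diff_audio(ref_path, test_path):
    # load both waves entirely into memory
    ref = open(ref_path, 'rb').read()
    test = open(test_path, 'rb').read()
    off_by_one = 0
    too_far = 0
    # walk the buffers 2 bytes (1 signed little-endian 16-bit sample) at a time
    for i in range(0, min(len(ref), len(test)), 2):
        (sample1,) = struct.unpack("<h", ref[i:i+2])
        (sample2,) = struct.unpack("<h", test[i:i+2])
        diff = abs(sample1 - sample2)
        if diff == 1:
            off_by_one += 1
        elif diff > 1:
            too_far += 1
    print("total deviations of 1 = %d" % off_by_one)
    print("total deviations of more than 1 = %d" % too_far)
    return (off_by_one, too_far)

if __name__ == "__main__" and len(sys.argv) >= 3:
    diff_audio(sys.argv[1], sys.argv[2])
```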

I replaced abs() with a branch that checks whether the diff is < -1 or > 1, but that didn't improve speed measurably. I think the constant unpacking might have something to do with it. Better solutions welcome. (By comparison, running 'cmp' over 2 identical files of the same size as the test above, living on a network share, takes less than 2 seconds.)

For a 10-second sample of a .m4a stereo AAC file (882,000 samples), these are the numbers of PCM samples that deviated by exactly 1 (first number) and by more than 1 (second number). You will notice that no samples deviated by more than 1, which was my hypothesis at the start and the basis on which I devised this plan:

Mac OS X / PPC / gcc 4.0.1
432691, 0

Linux / x86_32 / icc
238, 0

Linux / x86_32 / gcc 2.95.3
0, 0

Linux / PPC / gcc 4.0.4
Linux / PPC / gcc 4.1.2
Linux / PPC / gcc 4.2.4
Linux / PPC / gcc 4.3.2
Linux / PPC / gcc svn
432701, 0

Linux / x86_64 / gcc 4.0.4
Linux / x86_64 / gcc 4.1.2
Linux / x86_64 / gcc 4.2.4
Linux / x86_64 / gcc 4.3.2
Linux / x86_64 / gcc svn
248, 0

Linux / x86_32 / gcc 3.4.6
Linux / x86_32 / gcc 4.0.4
Linux / x86_32 / gcc 4.1.2
Linux / x86_32 / gcc 4.2.4
Linux / x86_32 / gcc 4.3.2
Linux / x86_32 / gcc svn
237, 0

Mac OS X / x86_64 / gcc 4.0.1
244, 0

I have thrown RealAudio Cooker and 28.8 samples at this, and both work. I am still testing this against some more audio samples to ensure that this idea holds water.

Posted in FATE Server, Python | 12 Comments »

12 Responses

  1. jpc Says:

    You may want to try numpy. AFAIK it can read arbitrary binary data (I'm unsure about arbitrary endianness) and do the difference and > comparison in one step. Summing the resulting boolean array is also doable AFAIR. (Each of these operations is implemented as a tight loop in C, so the performance should be better.)

  2. compn Says:

    did you feed it arbitrary data (or just a different audio sample) to make sure your +1/-1 checker is working?

  3. Multimedia Mike Says:

    That’s a darn good question, compn. I just did a quick experiment to verify:

    $ dd if=/dev/urandom of=file1 count=1024
    1024+0 records in
    1024+0 records out
    524288 bytes transferred in 0.085481 secs (6133392 bytes/sec)

    $ dd if=/dev/urandom of=file2 count=1024
    1024+0 records in
    1024+0 records out
    524288 bytes transferred in 0.091987 secs (5699595 bytes/sec)

    $ diff-audio.py file1 file2
    total deviations of 1 = 0
    total deviations of more than 1 = 262132

  4. Vitor Says:

    I can imagine two reasons it is slower than cmp. The first one is that you are reading two bytes at a time, which is slow (and could be even slower through NFS). I think it is better to read a > 1 kB block at a time.

    The second thing is that you are doing 16-bit comparisons, which I guess are not the fastest on x86. Maybe it is better to compare int32s and only unpack them into int16s to find the difference when they differ.
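(A sketch of the tiered comparison Vitor suggests — compare 4-byte words first and only unpack to individual 16-bit samples on a mismatch. The function name and details are illustrative, not from the post:)

```python
import struct

def count_deviations(ref, test):
    """Compare two PCM byte buffers 4 bytes (two int16 samples) at a time;
    only unpack to 16-bit samples when the 4-byte words differ.
    A trailing odd sample is ignored for brevity."""
    off_by_one = 0
    too_far = 0
    n = min(len(ref), len(test)) & ~3  # whole 4-byte words only
    for i in range(0, n, 4):
        if ref[i:i+4] == test[i:i+4]:
            continue  # identical words contribute no deviations
        # words differ: unpack both samples and measure each deviation
        r1, r2 = struct.unpack("<2h", ref[i:i+4])
        t1, t2 = struct.unpack("<2h", test[i:i+4])
        for d in (abs(r1 - t1), abs(r2 - t2)):
            if d == 1:
                off_by_one += 1
            elif d > 1:
                too_far += 1
    return (off_by_one, too_far)
```

Since most words in near-identical decodes compare equal, the slow unpacking path runs rarely.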

  5. Multimedia Mike Says:

    I load the entire reference file into memory before comparing, so I’m not making the mistake of loading byte-by-byte over a network share (SMB in this case). Actually, when I ran the cmp test, I deliberately put the files on a network share to slow down the operation (and hopefully avoid caching); doing the 2x33MB compare from the local filesystem was just too fast to reasonably measure (less than a second). There might have been caching at work with local files, though it’s useful to note that I repeatedly performed the same diff-audio experiment with local files with no measurable timing difference.

    That’s an interesting optimization with the multi-tiered comparison. I’ll profile that.

  6. Adam Ehlers Nyholm Thomsen Says:

    If you decide to use numpy, beware that abs(-2^15) == -2^15 for their int16 type, which might give unexpected results. However, this only occurs with really large differences, and given the general speed of numpy, a check can easily be added by specifying (abs(array) < 0).sum() as a test.
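(To illustrate Adam's point: numpy's fixed-width integers wrap on overflow, so both abs() of the minimum int16 and an int16 subtraction of very different samples can silently go wrong; widening to int32 before subtracting sidesteps it. This demonstration is mine, not from the thread:)

```python
import numpy as np

# abs() of the most negative int16 wraps back onto itself (two's complement)
print(abs(np.int16(-2**15)))    # -32768, not +32768

# the same wrap bites int16 subtraction of extreme samples
a = np.array([-2**15], dtype=np.int16)
b = np.array([2**15 - 1], dtype=np.int16)
print(np.abs(a - b))            # wrapped, far from the true difference

# widening to int32 first gives the true difference
print(np.abs(a.astype(np.int32) - b.astype(np.int32)))   # [65535]
```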

  7. Adam Ehlers Nyholm Thomsen Says:

    Once again my comment was eaten by your spamfilter, so I'll try to summarise my much earlier comment from 1 hour ago. I tried running a barebones Python loop of 33*1024*512 iterations and it took around 7 seconds on my machine, whereas your test script took 40 seconds on two 33 MB files. A test script based on numpy took around 1 second on the same 33 MB files. The numpy script was:
    import numpy
    file1 = numpy.fromfile(open('/tmp/file1', 'rb'), dtype=numpy.int16)
    file2 = numpy.fromfile(open('/tmp/file2', 'rb'), dtype=numpy.int16)
    res = abs(file1 - file2)
    print (res==0).sum(), (res==1).sum(), (res>1).sum(), (res<0).sum()

  8. Adam Ehlers Nyholm Thomsen Says:

    Btw numpy handles endianness gracefully: you are able to specify a string such as '<i2' as the dtype parameter, which gives, in order: endianness, datatype/signedness, and datasize (in bytes).
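(A quick demonstration of the dtype strings Adam describes — the same bytes reinterpreted under little-endian '<i2' and big-endian '>i2'; the example values are mine:)

```python
import numpy as np

# two little-endian int16 values: bytes 01 00 02 00
raw = np.array([1, 2], dtype='<i2').tobytes()

print(np.frombuffer(raw, dtype='<i2'))   # [1 2]
print(np.frombuffer(raw, dtype='>i2'))   # [256 512]  (byte-swapped reading)
```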

  9. Multimedia Mike Says:

    Thanks for the tips on numpy, Adam and jpc. I have been checking the documentation. Also: is numpy part of the standard Python library? If so, starting from which version? I am a little wary of using external Python libraries because of concerns that they might not be compilable everywhere that Python is available. As an impromptu scan:

    Ubuntu 8.10 with Python 2.5.2 doesn't respond to "import numpy" but "import _numpy" does work (I found a module called _numpy.so in the filesystem).

    Mac OS X with Python 2.5.1 responds to “import numpy”.

    A Red Hat Enterprise Linux system with Python 2.3.4 doesn’t respond to any numpy imports.

  10. Multimedia Mike Says:

    Wow! numpy is exceptional, at least compared to the straight-up Python method above. The comparisons run too fast to really measure, which is good enough for this purpose. Thanks.

  11. Cd-MaN Says:

    Cool solution. I was thinking about throwing together a solution based on FFT (with numpy :-)), but it is very nice that you seem to have found “the simplest solution which works”.

    Best regards.

  12. jpc Says:

    numpy is not standard, but AFAIK it is widely used, so you should not have many problems compiling it (even for different platforms). And it is actively maintained, so you can count on getting some help.