Breaking Eggs And Making Omelettes

Topics On Multimedia Technology and Reverse Engineering


Archives:

Indeo 5 and Partial Bink in FFmpeg

February 11th, 2010 by Multimedia Mike

There have been some great additions to FFmpeg in recent weeks. Most notable is an Indeo 5 video decoder. Congratulations to everyone who worked hard to reverse engineer this codec that was used in quite a few video games. The sample I selected for a FATE test spec is called Educ_Movie_DeadlyForce.avi:


SWAT 3: Deadly force Indeo 5 video

The video is much funnier in its original context (though it’s no longer posted there). Thankfully, the math behind Indeo 5 is bit-exact, which allowed me to enter a test spec right away.

While Indeo 5 was used in quite a few PC games through the years, no game-related format can touch Bink. FFmpeg now includes a Bink file demuxer. Further, FFmpeg now has decoders for both variations of Bink audio (designated DCT and RDFT), which can also occur in Smacker files.

So I added new FATE test specs to cover those new additions. I also went through the FATE test coverage wiki page and eliminated a bunch of low-hanging fruit. Sometimes, there were samples (some difficult to find) at the samples archive; other times, it was necessary to do a Google search for “filetype:<file extension>”. To give you an idea of the current trends in the shifting sands of the internet, such searches invariably seem to yield Facebook pages as their top hits.

These are the new FATE tests:

Michael has been at work fixing more of the formal H.264 conformance vectors. Two new tests that reflect this work are h264-conformance-frext-frext_mmco4_sony_b and h264-conformance-frext-frext2_panasonic_b. Further, I am in the process of amending the ea-mad (now ea-mad-adpcm-ea-r1) test to use a sample that has EA R1 ADPCM in addition to EA Madcow video. The new sample is staged, and I will update the spec to reflect it when I activate the new specs.

Regarding the iff-ilbm test, I could only find one sample on the internet for that format. It’s a bit weird:


lms-matriks

It came from a demoscene archive. I wonder whether this immortalized test vector is self-deprecating humor aimed at one’s own demo group or slander of a rival demo group.

Posted in FATE Server | 17 Comments »

Split Personality Blogger

February 10th, 2010 by Multimedia Mike

I came across this Typealyzer web site which purports to assess a blogger’s personality type based purely on the written word. I have 3 active blogs and I apparently manage to write using a different personality type on each blog:

  • This blog — my personal technical blog — pegs me as “INTJ – The Scientists”.
  • My Gaming Pathology blog — where I write about usually obscure video games — marks me as “ESTP – The Doers”.
  • My corporate blog — where I speak in fairly careful terms about what I do at my day job — earns me the distinction of “ENTJ – The Executives”.

I suppose all of those make sense. Each blog is written with a slightly different tone. This is in keeping with the website’s explanation that “This is about exploring social roles (or personas) that are expected to be different in different situations.” I think it’s frustrating that I have to write my corporate blog in an executive, often vacuous tone (and I know it frustrates the readers to no end as well); I would much prefer if it could lean toward “The Scientists” end of the personality inventory. Alas, it is not to be.

I popped in a bunch of blogs I read, but they all seem to lean toward certain areas of the brain chart. According to that chart, I don’t seem to read any blogs by people heavy in the sensing or feeling departments. I have a feeling that I wouldn’t be able to tolerate them. On a hunch, I plugged in the blog produced by the top Google search for “angsty teenager blog” — Teen Angst Poetry. That scores as “ISFP – The Artists”. Sure enough, I don’t think I would enjoy reading that blog.

Posted in General | 5 Comments »

Designing a Codec For Parallelized Entropy Decoding

February 9th, 2010 by Multimedia Mike

When I was banging my head hard on FFmpeg’s VP3/Theora VLC decoder, I kept wondering if there’s a better way to do it. Libtheora demonstrates that there obviously must be, since it can decode so much faster than FFmpeg’s decoder.

In the course of brainstorming better VLC approaches, I wondered about the concept of parallelizing the decoding operation somehow. This sounds awfully crazy, I know, and naturally it’s intractable to directly parallelize VLC decoding, since you can’t know where item (n+1) begins in the bitstream until you have decoded item (n). I thought it might be plausible to make a pass through the bitstream decoding all of the tokens and ancillary information into fixed-length arrays, and then make a second pass to sort out the information in some kind of parallelized manner. Due to the somewhat context-adaptive nature of Theora’s VLCs, I don’t know if I can make a go of that.

Why can’t VLC decoding be done in parallel, either through SIMD or thread-level parallelization? I think the answer is clearly, “because the codecs weren’t designed for it”. Here’s the pitch: Create a codec bitstream format where the frame header contains indices pointing into the bitstream. For example, Sorenson Video 1 encodes the Y, U, and V planes in order and could be effectively decoded in parallel if only the frame header encoded offsets to the start of the U and V planes. Similarly, if a Theora frame header encoded offsets to certain coefficient groups in the header, that could facilitate multithreaded coefficient VLC decoding. Some other modifications would need to occur in the encoder, e.g., guaranteeing that no end-of-block runs cross coefficient group boundaries as they often do now.
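To make the pitch concrete, here is a rough sketch in C with pthreads of what such a parallel-friendly frame layout might look like. Everything here is hypothetical — the header fields, the struct names, and the three-plane layout borrowed from the Sorenson Video 1 example above; decode_plane() merely stands in for the ordinary serial VLC decoding loop.

    #include <pthread.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical parallel-friendly frame header; no real codec has these
     * fields -- this is only the idea. Offsets are relative to the start
     * of the frame. */
    typedef struct {
        uint32_t frame_size;     /* total bytes in this frame */
        uint32_t u_plane_offset; /* where the U plane's coded data begins */
        uint32_t v_plane_offset; /* where the V plane's coded data begins */
        /* the Y plane's data implicitly begins right after this header */
    } FrameHeader;

    typedef struct {
        const uint8_t *data; /* start of this plane's coded data */
        size_t size;         /* bytes of coded data for this plane */
    } PlaneJob;

    /* Stand-in for the ordinary serial VLC decoding loop for one plane. */
    static void *decode_plane(void *arg)
    {
        PlaneJob *job = arg;
        (void)job; /* ... decode job->size bytes starting at job->data ... */
        return NULL;
    }

    /* With explicit offsets in the header, each thread knows exactly where
     * its slice of the bitstream starts, so all three planes can be
     * decoded concurrently. */
    static void decode_frame(const uint8_t *buf, const FrameHeader *hdr)
    {
        PlaneJob jobs[3] = {
            { buf + sizeof(FrameHeader), hdr->u_plane_offset - sizeof(FrameHeader) },
            { buf + hdr->u_plane_offset, hdr->v_plane_offset - hdr->u_plane_offset },
            { buf + hdr->v_plane_offset, hdr->frame_size     - hdr->v_plane_offset },
        };
        pthread_t threads[3];

        for (int i = 0; i < 3; i++)
            pthread_create(&threads[i], NULL, decode_plane, &jobs[i]);
        for (int i = 0; i < 3; i++)
            pthread_join(threads[i], NULL);
    }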

Taking this a step further, think about arithmetic/range coding. Decoding a bitstream encoded with such an algorithm notoriously requires a multiplication per value decoded. How about designing the bitstream in such a way that 2 or 4 multiplications can be done in parallel through any number of SIMD instruction sets?
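As a scalar reference point, here is the well-known binary range decoder from VP8 (RFC 6386) — the per-symbol multiply lives in decode_bool(). The hypothetical part is decode_bool4() at the bottom: if an encoder emitted four independent substreams (located via frame header offsets, as above), the four lanes would share no state, and that multiply could in principle become a single SIMD multiply across all four lanes.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        const uint8_t *input; /* next compressed byte */
        const uint8_t *end;
        uint32_t range;       /* 128..255 after renormalization */
        uint32_t value;       /* coded bits, aligned to a 16-bit window */
        int bit_count;        /* bits consumed from the low byte of value */
    } bool_decoder;

    static void init_bool_decoder(bool_decoder *d,
                                  const uint8_t *start, const uint8_t *end)
    {
        d->input = start;
        d->end = end;
        d->value = 0;
        for (int i = 0; i < 2; i++) /* prime the window with 16 bits */
            d->value = (d->value << 8) | (d->input < d->end ? *d->input++ : 0);
        d->range = 255;
        d->bit_count = 0;
    }

    /* Decode one binary symbol; 'probability' (0..255) is the chance of
     * a zero. Note the one multiply per symbol decoded. */
    static int decode_bool(bool_decoder *d, int probability)
    {
        uint32_t split = 1 + (((d->range - 1) * probability) >> 8);
        uint32_t SPLIT = split << 8; /* align split with the value window */
        int retval;

        if (d->value >= SPLIT) { /* decoded a one */
            retval = 1;
            d->range -= split;
            d->value -= SPLIT;
        } else {                 /* decoded a zero */
            retval = 0;
            d->range = split;
        }
        while (d->range < 128) { /* renormalize */
            d->value <<= 1;
            d->range <<= 1;
            if (++d->bit_count == 8) { /* shift in a new byte */
                d->bit_count = 0;
                d->value |= (d->input < d->end ? *d->input++ : 0);
            }
        }
        return retval;
    }

    /* The hypothetical payoff: four independent substreams advanced in
     * lockstep. A SIMD version could do the four (range-1)*probability
     * multiplies in one instruction. */
    static void decode_bool4(bool_decoder lane[4], const int prob[4], int out[4])
    {
        for (int i = 0; i < 4; i++)
            out[i] = decode_bool(&lane[i], prob[i]);
    }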

Do these ideas have any plausibility? I have this weird feeling that if they do, they’re probably already patented. If they aren’t, well, maybe this post can serve as prior art.

Posted in Outlandish Brainstorms | 2 Comments »

Profiling and Optimizing Theora

February 8th, 2010 by Multimedia Mike

Because I have a short memory, I wanted to write down some of the knowledge and wisdom I’ve collected in the past few months as I have been working on optimizing FFmpeg’s VP3/Theora decoder.

Profiling Methods
These are some of the general tools:
Read the rest of this entry »

Posted in VP3/Theora | 3 Comments »

30-hour Do-nothing Build

February 5th, 2010 by Multimedia Mike

I have a habit of prepending ‘time’ to all of my ‘make’ commands in order to keep a rough estimate of how long build jobs take.

Adhering to this custom, I performed a ‘make’ command on a project that didn’t actually require any rebuilding. So how does the following happen?

$ time make -j5

[...]

real    1770m35.893s
user    0m12.408s
sys     0m11.692s

Answer: The machine (virtual machine, actually) had just been started, had a grossly out-of-sync clock, and must have synced to the time server during the narrow window in which the build was occurring:

make[2]: Warning: File `...' has modification time 1.8e+04 s in the future
make[2]: warning:  Clock skew detected.  Your build may be incomplete.

Posted in General | 1 Comment »

Security Memory

February 4th, 2010 by Multimedia Mike

I dug up this old security alert. It’s very dear to me in that I’m directly responsible for the security problem outlined. Whenever I feel like my work doesn’t matter, I just have to remind myself that I have written code that has become widespread enough that it warrants security notices. Many programmers likely go their whole career without making that kind of impact. (That kind of positive spin might be similar to not knowing or caring about the difference between positive and negative attention.)

For the curious, I wrote an AIFF demuxer (among many others) for the xine project. For some reason, I allocated a fixed-size buffer of 100 bytes on the stack and proceeded to read a number of bytes from user input, a number that was also determined by the same user input. Big no-no, and I really don’t know what I was thinking; hardcoded, arbitrary constants (char buffer[100]) aren’t usually my style. After that was found, I audited the rest of my demuxers for similar mistakes and found none. It may seem like this would only be a problem if a user directly loaded a malicious file into xine. However, since AIFF has a MIME type, and because there was a Mozilla plugin version of xine, it would have been possible to send a malicious AIFF through a web page.
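To illustrate the pattern (reconstructed from memory — this is not the actual xine code, and the chunk parsing is simplified), the bug and its obvious fix look something like this:

    #include <stdint.h>
    #include <stdio.h>

    /* The bug: a fixed 100-byte stack buffer filled with a length that
     * comes from the same untrusted file. */
    static int read_chunk_bad(FILE *f)
    {
        uint8_t len[4];
        char buffer[100]; /* fixed-size stack buffer */
        uint32_t chunk_len;

        if (fread(len, 1, 4, f) != 4)
            return -1;
        chunk_len = ((uint32_t)len[0] << 24) | (len[1] << 16) |
                    (len[2] << 8) | len[3];

        /* BUG: chunk_len is attacker-controlled and unchecked; any value
         * over 100 smashes the stack. */
        return fread(buffer, 1, chunk_len, f) == chunk_len ? 0 : -1;
    }

    /* The fix: validate the untrusted length against the buffer size. */
    static int read_chunk_good(FILE *f)
    {
        uint8_t len[4];
        char buffer[100];
        uint32_t chunk_len;

        if (fread(len, 1, 4, f) != 4)
            return -1;
        chunk_len = ((uint32_t)len[0] << 24) | (len[1] << 16) |
                    (len[2] << 8) | len[3];

        if (chunk_len > sizeof(buffer)) /* reject oversized chunks */
            return -1;
        return fread(buffer, 1, chunk_len, f) == chunk_len ? 0 : -1;
    }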

I was reflecting on this because of a major security problem I recently found in FATE while investigating another issue. It has to do with the data logging script that receives FFmpeg build and test information from FATE clients. I’ll let my commit message to my private git repository tell the tale:

    Get rid of mind-boggling security hazard that actually prints out the
    user's actual hash key when passed an invalid hash. This was obviously
    added for debugging purposes and was only triggered if a user had access
    to insert data for a particular configuration.

If an attacker knew a valid username, the system would cheerfully reveal the corresponding hash key if the HMAC check failed. Using this vector, an attacker could have polluted the FATE database with loads of bad data. Not a huge deal in the grand scheme of things, but given that this is the only attack the system is trying to guard against, it amounts to a total failure in context.
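The script in question is PHP, but the shape of the mistake translates to any language. Here is a hypothetical C rendering — lookup_hash_key() and hmac_matches() are invented stand-ins for the real database lookup and HMAC check:

    #include <stdio.h>
    #include <string.h>

    /* Stand-in: a real system would query its user database. */
    static const char *lookup_hash_key(const char *user)
    {
        return strcmp(user, "fate-client-1") == 0 ? "s3cret-hash-key" : NULL;
    }

    /* Stand-in for HMAC computation plus a constant-time compare. */
    static int hmac_matches(const char *key, const char *payload,
                            const char *submitted_hmac)
    {
        (void)key; (void)payload; (void)submitted_hmac;
        return 0;
    }

    int authenticate(const char *user, const char *payload,
                     const char *submitted_hmac)
    {
        const char *key = lookup_hash_key(user);
        if (!key || !hmac_matches(key, payload, submitted_hmac)) {
            /* The hazard was the moral equivalent of this debugging
             * leftover:
             *     printf("invalid hash; expected key: %s\n", key);
             * On failure, reveal nothing beyond the failure itself. */
            fprintf(stderr, "authentication failed\n");
            return -1;
        }
        return 0;
    }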

Honestly, sometimes I can’t believe people let me anywhere near a programming environment.

One last — and fascinating — note about that AIFF exploit: It was the result of an infamous university course (perhaps this one?) given by D. J. Bernstein in which students were required to find 10 security holes in open source software programs during the term. Reportedly, all of the students failed the class since none actually found 10 holes. I don’t know if the class was ever held again.

Posted in FATE Server, Programming | 1 Comment »

What’s So Hard About 0xA9?

February 3rd, 2010 by Multimedia Mike

No matter how much I think I know about character encoding, or about trying to work around issues arising from it, I’ll always get bitten.

For a long time, one particular FATE configuration has shown 87/127 tests succeeding, even though the total number of test specs has crept up to over 300. I investigated this errant configuration on the client side and concluded that it was, in fact, executing all of the tests and sending all the results over to the server in one neat package. Apparently, the problem was on the server side. Since it was an older Intel C compiler configuration, I didn’t care to investigate much further.

At one point, some bad bit of code was checked into FFmpeg and all of the results started showing xy/127 tests succeeding. This made the issue a bit more pressing. Mans discovered that the problem had to do with the svq3 test spec failing. The bad code affecting the SVQ3 test was quickly fixed, so I didn’t worry about it again until yesterday when, once again, FATE’s various configs were only reporting that 127 tests had been run.

Here’s what was happening: FATE stores the stderr output of a test only if the test spec fails. This is a key data point: when a test succeeds, everything is fine and FATE tosses the output. The sample used for the SVQ3 test outputs the following metadata (among other data) on stderr (seen, for example, in this test result):

    copyright       : ? Vertical Online 2001
    copyright-eng   : ? Vertical Online 2001

Those mystery characters map to the byte 0xA9, which is the c-in-a-circle copyright symbol (U+00A9) according to the Unicode tables I can find. That byte is making the system choke somewhere along the line, which annoys me greatly. When the client-side Python script executes the test and stuffs the stdout and stderr into the SQLite database, the relevant field is supposed to be a blob: a binary large object. The receiving PHP script on the server is also supposed to honor that blob schema.

Mans’ solution is to specifically encode the stdout/stderr blobs as UTF-8 strings in the client-side Python script. That fixes this problem. But I’m confused as to why this is necessary in the first place. Was the PHP script doing its best to interpret the data inside the blob and falling over? Or was the SQLite engine on the server confused by the 0xA9 character in the blob?
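For reference, here is what honoring the blob schema looks like at the SQLite level, sketched with SQLite’s C API rather than the actual Python/PHP pair (the table and column names are invented):

    #include <sqlite3.h>

    /* Store captured stderr as a true BLOB so bytes like 0xA9 pass
     * through untouched. Binding with sqlite3_bind_text() instead would
     * invite the engine -- or any layer above it -- to treat the bytes
     * as encoded text, which is roughly where this bug lived. */
    int store_stderr(sqlite3 *db, int test_id,
                     const unsigned char *data, int len)
    {
        sqlite3_stmt *stmt;
        int rc = sqlite3_prepare_v2(db,
            "UPDATE test_result SET stderr = ? WHERE id = ?",
            -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;
        sqlite3_bind_blob(stmt, 1, data, len, SQLITE_TRANSIENT);
        sqlite3_bind_int(stmt, 2, test_id);
        rc = sqlite3_step(stmt);
        sqlite3_finalize(stmt);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }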

Also, I suddenly find myself wondering how the A9 search company got its name.

Update: Thanks to MichaelK. for pointing out the problem. While I was properly converting stdout/stderr from build records to a binary type (which was necessary there since that output is compressed before going into the database), I never did the same for test result stdout/stderr. Since the test result data is “just strings”, or so I thought, I saw no reason to do so.

Posted in FATE Server | 3 Comments »

Creative Nomad Zen Reflections

February 2nd, 2010 by Multimedia Mike

In the middle of 2004 I purchased a Creative Nomad Zen Xtra portable MP3 player. “MP3 player” was not quite a commonplace concept yet, but the word “iPod” was beginning to catch on. When describing this new toy to people, I usually called it “about the same as an iPod but about 1/2 the price”, which was absolutely true when I purchased it.

Here is my Nomad compared to a 1st generation Apple iPod Touch, my current MP3 player (and more):


Creative Nomad Zen Xtra compared to Apple iPod Touch (1st gen)

The Nomad Zen Xtra served me well for 3 solid years until I finally got a proper iPod in summer of 2007. I have kept the unit around since then for no particular reason. I decided to disassemble and photograph it before I send the battery and electronics off to their respective recycling destinations.


Creative Nomad Zen Xtra with its front plate and battery removed

The Nomad Zen Xtra was highly user-serviceable and upgradeable. At the time I put it out of service, the battery could barely run for 5 hours (whereas 10-12 hours was no problem when it was new). A replacement battery would be easy enough to order from assorted battery shops on the internet.


Creative Nomad Zen Xtra with back plate and hard drive removed

Have more than 40 GB of songs? Take off the back of the unit, remove the 2.5″ 40 GB IDE HD, and replace it with a larger one. That never proved to be necessary for me; in fact, I soon realized after I bought it that the lower-end 30 GB model would have been more than enough.

The 40 GB HD from the unit is still perfectly good. I decided to hook it up to a Linux computer and see if there was anything I could work out about the filesystem. Before I got too far into it, a little Googling led me to a Python utility called zenrecover.py. It works famously:

$ python zenrecover.py /dev/sdc songs /home/melanson/mnt/zen
0% 3.6MB/s "Bizet_Intermezzo_from_Carmen.mp3" (6.8MB)

Just for fun, I dumped all the songs from the unit. I discovered a few things I had long forgotten and that had never made the transition to my iPod. Curiously, the very first items that the utility dumped (likely because they occupied the first parts of the filesystem) were a selection of classical tunes as played by the “Beijing Central Phil Orchestra”. These songs came with the unit. It’s notable that the utility transferred them off, because the packaged software did not allow the user to do so (I’m pretty sure it allowed all music that was downloaded to the unit to be transferred off).

Ugh, that packaged software had to be the worst part about the Nomad Zen Xtra. I know lots of users like to chastise iTunes over a range of pet peeves. I think such people have simply never been exposed to anything worse, like this software.

Posted in General | 11 Comments »

Multimedia Document Management System

February 1st, 2010 by Multimedia Mike

Someone recently updated a link in the MultimediaWiki page for mirrored documents. Naturally, that doesn’t automatically update the mirrored copy @ multimedia.cx (having me poll the page history and manually update the mirrored copy hardly counts as an automated process). I suddenly thought that it would be desirable to have a content management system that allows authorized users to upload and organize documents, particularly PDF documents which comprise many of these mirrored documents. Sort of a… document management system.

“Document Management System.” Sounds enterprise-y. Here’s what I want:

  • Free, open source solution in which I do not have to modify a single line of code
  • Allows me to create a list of users who have permissions to upload or delete PDF documents
  • Allows authorized users to upload or delete PDF documents
  • Manages at least a minimum of metadata

The key thing here is to allow authenticated users to upload and manage these mirrored documents. I know many will say, “Drupal/WordPress/MyFaveCMS can be coerced to do just that!” And I don’t dispute any such claims. It’s also true that nearly any program you need to write can be written in straight C, eschewing any higher level languages. I was just hoping for a more turnkey solution that doesn’t require me to learn a lot or do my own coding or customization.

I guess the problem here is that no one sets out to write such a simple CMS. A CMS might start out simple but eventually grows into the next Drupal. I probably need to come to terms with the fact that there is no prepackaged solution that exactly fits this simple need without at least some tinkering.

Posted in General | 10 Comments »
