Monthly Archives: March 2009

icc vs. gcc Smackdown, Round 3

How did I become the benchmark peon? Oh right, I actually dared to put forth some solid benchmarks and called for suggestions for possible improvements to the benchmark methodology. This is what I get.

Doing these benchmarks per all the suggestions I have received is time-consuming and error-prone. But if you know anything about me by now, you should know that I like automating time-consuming and error-prone tasks. This problem is looking more and more like a nail, so allow me to apply my new favorite hammer: Python!

Here’s the pitch: Write a Python script that iterates through a sequence of compiler configurations, each with its own path and unique cflags, and compiles FFmpeg. For each resulting build, decode a long movie twice, tracking the execution time in milliseconds. Also, for good measure, follow Reimar’s advice and validate that the builds are doing the right thing. To this end, transcode the first 10 seconds of the movie to a separate, unique file for later inspection. After each iteration, write the results to a CSV file for graphing.

And here’s the graph:


[Graph: icc vs. gcc smackdown, round 3]

Look at that! gcc 4.3.2 still isn’t a contender but gcc 4.4-svn is putting up a fight.

Here are the precise details of this run:

  • Movie file is the same as before: 104-minute AVI; ISO MPEG-4 part 2 video (a.k.a. DivX/XviD) at 512×224, 24 fps; 32 kbps, 48 kHz MP3
  • This experiment includes gcc 4.4.0-svn, revision 143046, built on 2009-01-03 (I’m a bit behind)
  • All validations passed
  • Machine is a Core 2 Duo, 2.13 GHz
  • All 8 configurations are compiled with --disable-amd3dnow --disable-amd3dnowext --disable-mmx --disable-mmx2 --disable-sse --disable-ssse3 --disable-yasm
  • icc configuration compiled with --cpu=core2 --parallel
  • gcc 4.3.2 and 4.4.0-svn configurations compiled with -march=core2 -mtune=core2
  • all other gcc versions compiled with no special options


What’s in store for round 4? It sure would be nice to get icc 11.0 series working on my machine for once to see if it can do any better. And since I have the benchmark framework, it would be nice to stuff LLVM in there to see how it stacks up. I would also like to see how the various builds perform when decoding H.264/AAC. The problem with that is the tremendous memory leak that slows execution to a crawl during a lengthy transcode. Of course I would be willing to entertain any suggestions you have for compiler options in the next round.

Better yet, perhaps you would like to try out the framework yourself. As is my custom, I like to publish my ad-hoc Python scripts here on my blog or else I might never be able to find them again.


Knowing Too Much

I heard an old, familiar song on the radio this morning. But something about it was off, and I knew exactly what. I found myself yelling at the radio, “Use a higher bitrate!” For you see, the chorus of the song exhibited something that sounded like the notorious “underwater” artifact that MP3 produces when encoding at too low a bitrate.

I remember first hearing, perhaps 10 years ago, that radio stations were starting to move all of their music to MP3 (before that, I had heard that some stations kept a stack of about 10 CD players with music queued up; who really knows? I’m sure different stations use different equipment and setups). I just assumed that a radio station would use the highest bitrate possible. Perhaps this particular encoding was a leftover from when the station first moved to MP3 (the song itself was from 1995), when they assigned an intern to use some shareware encoder that was only capable of 96 kbps MP3.

I know I can’t be the only multimedia geek who gets frustrated at seeing sub-optimality deployed in the world at large. I remember staying at a hotel during Christmas of 2000 (the same year I was just starting to study multimedia) where the in-hotel movie preview system through the TV displayed horrible blocking artifacts. At the time, I only vaguely understood what could have been going on.

FATE On DOS

FATE cycles coming from DOS? Sure, why not? As long as someone is willing to do the work to maintain such a machine and contribute continuous build/test data for FFmpeg, let’s let them. That’s why I designed FATE with its distributed model.

Meet Michael K. He has set up a DOSEMU/FreeDOS session on a modern machine that has all the amenities of a typical Unix system, including gcc provided by DJGPP. Python 2.5+ must also be available and that’s all that’s really needed to run FATE (that and TCP/IP networking, somehow).

And now I really need to get some kind of front page revision rolled out. I happen to know some people are getting annoyed at having to sift through the, ahem, less relevant platforms at the top to get down to the real meat.

Escape From HappyFaceLand

I delved deep into my personal programming archives and was reminded of the brief stint I served at my dream job as a video game developer. The game I worked on was entitled Escape From HappyFaceLand. The stories you have no doubt heard about the game industry are true: ridiculous hours, breakneck development cycles, struggles with arcane gaming technology, and haphazard coding just to get something that barely works in time to meet an artificial deadline, only to be promptly forgotten, even though the game is an unqualified success at release.
