Category Archives: FATE Server

Belief In The Compiler

If you keep up with FATE as obsessively as I do, you may have noticed that I got Intel’s C compiler (icc) into the build farm. It was a struggle, but I finally made it happen. The compiler is distributed as an RPM, but the x86_32 build machine runs Ubuntu. I googled and found a number of blog posts describing how to install it on Ubuntu. I went the route of using the alien program to convert the RPM to a DEB, installing that, manually modifying the ‘icc’ shell script to point to the correct INSTALLDIR, and updating the ld configuration to point to the right libraries. Finally, I installed the free-for-non-commercial-use license file in one of the many acceptable locations and I was off.
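For the record, the sequence went roughly like this. Treat it as a sketch rather than a recipe: the package names, install paths, and license file name are illustrative and vary with the exact icc release.

  # convert Intel's RPM packages to DEBs and install them
  sudo alien --to-deb intel-icc*.rpm
  sudo dpkg -i intel-icc*.deb

  # edit the 'icc' wrapper script so INSTALLDIR points at the real install path
  sudo nano /opt/intel/cc/10.1.012/bin/icc    # set INSTALLDIR=/opt/intel/cc/10.1.012

  # tell the dynamic linker where Intel's runtime libraries live
  echo /opt/intel/cc/10.1.012/lib | sudo tee /etc/ld.so.conf.d/intel-icc.conf
  sudo ldconfig

  # drop the non-commercial license file into one of the accepted locations
  sudo mkdir -p /opt/intel/licenses
  sudo cp my-noncommercial-license.lic /opt/intel/licenses/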

When I first started fighting with icc about a month ago, the compiler was on version 10.1.008. It is now on 10.1.012, indicating that I may need to update it almost as frequently as the SVN version of gcc used on the farm.

I also tried to get the x86_64 version of the compiler running on the appropriate build machine. When I try to run ‘icc’, I get one of the most annoying and confusing UNIX errors known to exist:

  -bash: /opt/intel/cce/10.1.012/bin/iccbin: No such file or directory

Even though I can see that file quite clearly exists.
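I have not chased down the cause yet. The first things I usually check for this class of error are what kind of binary the file actually is and whether the dynamic loader it requests exists, since a missing ELF interpreter produces this same misleading message:

  # what kind of binary is this, and which program interpreter does it expect?
  file /opt/intel/cce/10.1.012/bin/iccbin
  readelf -l /opt/intel/cce/10.1.012/bin/iccbin | grep interpreter

  # then verify that the interpreter it names is actually present, e.g.:
  ls -l /lib/ld-linux.so.2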

But at least icc is running for x86_32, and FFmpeg is compiling fine and running the same series of tests as all the gcc versions. Personally, I have never put a lot of stock in the optimizing prowess of proprietary compilers. I have seen a few too many that need to have their optimizers disabled because they are so obviously buggy. However, icc demonstrates some clear speedups over gcc based on FATE testing. If you open a build record page for an icc run in one window or tab, and then open a build record page for a gcc run in another, you can see that the icc-built binary generally runs faster. This is particularly notable on longer tests.

This exercise also reminds me that the SVN versions of gcc build slow binaries, at least on x86_32. I am wondering if this has to do with the way I am building the compiler, or if gcc 4.3 will actually produce substantially slower binaries.

And yes, I plan one day to deploy an easier way of comparing build performance over time and across platforms.

FATE’s Ugly History

Tonight, I finally implemented and deployed a build history browsing option for the FATE Server. Yeah, FATE just got a little uglier and more crowded in the process. But if you visit the main page and see that one of the build boxes is red instead of its recommended green, you can browse the corresponding history to learn approximately which SVN revision broke the build. I say “approximately” because commits can get stacked, e.g., a build/test cycle is in progress and 3 new changes are committed before the machine has a chance to check out code again and build. In that case, the history browser will at least provide links to each of the 3 commits in the online SVN browser, which allows a developer to inspect each change message that went into a build.

I hope to implement a similar history browser for test results so that a developer can analyze which SVN commit broke a particular test.

With these history features, I think I am finally finishing up the most basic useful features. Too bad the website looks so bad with its retro-1995 feel. Unfortunately, I have roughly zero experience in making websites pretty. What I care more about is that the website is functional for the nominal FFmpeg developer. I’m disappointed with how crowded the main page is becoming. However, I have had plans from the beginning for how to distill that data down to a few essential pieces of information. Plus, I have ideas about how to make the front page orders of magnitude faster to load (hello, caching strategy). Though I might be the only person who really cares, as I obsessively check the FATE status throughout the day.

Ambitious Testing Effort

When the FATE initiative went live, I asked the guru how to handle the H.264 conformance suite samples– should I just dump all of them into the database whether they were working or not, or should I only enter the samples that worked with the current version of FFmpeg? His answer was far more complicated than I could have anticipated:

  1. Enter all currently working samples
  2. If a particular H.264 conformance vector used to work with FFmpeg, add the sample and enter a new issue in the tracker
  3. Otherwise, don’t add the test yet

Whoa. As you know, I got task #1 accomplished relatively easily. Now I’m back to take on task #2.

Hypothesis: most of the code that can make or break the H.264 decoding process lives in files named libavcodec/h264*. Thus, test the sample suite against every single FFmpeg revision in which one of those files was touched.

  for file in libavcodec/h264*; do svn log $file; done | 
  grep "^r.*lines$" | 
  sed -e 's/^r\([0-9]*\).*$/\1/' | 
  sort -n | 
  uniq

That produces just over 400 different FFmpeg revisions that need testing. I had better get started early.

Algorithm outline (a rough script sketch follows the list):

  • create a script that takes the above revision list and the directory full of H.264 conformance vectors
  • create a list of standard test names based on the convention already in the database
  • query the database to obtain a complete list of all tests known to work currently
  • remove the working tests from the list of all tests
  • for each revision:
    • get the FFmpeg code corresponding to that revision
    • build FFmpeg, and use ccache to hopefully gain a little speedup in the process
    • test FFmpeg against all of the non-working samples, output results in a CSV format: “revision, 0, 0, 0, 1, 0, 0, 0, 0, 0,…”; this should facilitate analysis and serve to illustrate that the non-working samples have been broken from the get-go
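Here is a rough shell sketch of that loop. It glosses over the database query (it simply assumes the sample directory already contains only the currently non-working conformance vectors), and the exact ffmpeg command line for exercising an H.264 vector is a placeholder, but it captures the shape of the process:

  #!/bin/sh
  # usage: ./h264-history.sh revisions.txt /abs/path/to/nonworking-h264-samples
  # revisions.txt is the sorted revision list produced by the pipeline above
  REVISIONS=$1
  SAMPLES=$2

  for rev in $(cat "$REVISIONS"); do
    # get the FFmpeg code corresponding to this revision
    svn update -r "$rev" ffmpeg

    (
      cd ffmpeg || exit 1

      # build FFmpeg; ccache should take some of the sting out of 400+ builds
      ./configure --cc="ccache gcc" > /dev/null && make > /dev/null 2>&1

      # run every non-working sample; emit one CSV row: "revision, 0, 1, 0, ..."
      line="$rev"
      for sample in "$SAMPLES"/*; do
        if ./ffmpeg_g -i "$sample" -f framecrc - > /dev/null 2>&1; then
          line="$line, 1"
        else
          line="$line, 0"
        fi
      done
      echo "$line" >> ../h264-history.csv
    )
  done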

Hey, computing cycles are cheap, right? Perhaps the same ambitious strategy can be employed as a one-time brute force method to learn when other FFmpeg components broke so that they can be fixed and subsequently tested via FATE. And there’s no reason I have to do this on my own; I know certain FFmpeg developers who like to brag about their cumulative 27 or so underworked CPU cores lying around their flats (you devs know who you are).

Growing Pains Of FATE

When I upgraded my web hosting plan last summer from 800 MB of online storage to 1/2 TB, I wondered what I could possibly use all that extra space for. The FATE Server is stepping up to the task and presently — somehow — occupies 1/2 GB of space. This is not a problem in and of itself since it’s only 1/1000 of my total allotment. However, it makes a regular, responsible backup schedule difficult to keep. I have toyed with the idea of hosting the database operation on my own hardware and bandwidth. I’m pretty sure I’m the primary user of this database anyway. Having the database under local authority would also likely allow for greater flexibility and configurability for the underlying engine.

As always, I have plans to add many, many, many more tests. There are various public MPEG conformance suites for different codecs, each consisting of tens or hundreds of samples. There is FFmpeg’s internal regression suite which ought to be run and verified for each build. By my accounting, ‘./ffmpeg_g’ is invoked over 300 times when running ‘make test’; I suspect each of those invocations would be a separate test in the database. Whenever I think of getting down to it and starting to enter individual test specs into the database using my custom PHP tool, I always step back, glance at the magnitude of the task, and instead start outlining a script that will automatically process the test series for me, and with fewer mistakes.

However, there are yet more problems. There are only 76 tests currently, and logging the 76 individual test result records nominally takes 10-15 seconds. Yes, I use one MySQL connection, but with 76 separate INSERT queries. It would probably be more efficient to concatenate them into one INSERT query with 76 records. Better still would be to parameterize the data, compress it, and POST it via HTTP to a custom CGI script on the server that could uncompress it and, ideally, perform the INSERT locally and more efficiently. That approach would also solve the firewall and library problems outlined in a previous post and allow for more diverse platform expansion in the future.
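To illustrate the first idea (the table and column names below are invented, not FATE’s actual schema), the 76 statements collapse into one multi-row INSERT; the client script would build the VALUES list from its result set and ship it as a single query:

  # hypothetical schema: one row per (build, test, status)
  mysql -u fate -p fate_db -e "
    INSERT INTO test_result (build_id, test_id, status) VALUES
      (1234, 1, 1),
      (1234, 2, 0),
      (1234, 3, 1);  -- ...and so on, through all 76 test records
  "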

Finally, I also need to be able to test tests before deploying them. That’s right — test tests. I.e., enter a new test, or a series of new tests, into a staging area and be able to run a special script to verify that I got all the basic details right, such as sample paths and FFmpeg command line parameters. None of this nonsense about dumping in a new test spec and waiting until the next SVN commit to see if I got it all correct. Or worse yet, artificially starting a new build/test cycle with a documentation update SVN commit. Out of all the problems examined in this post, this should be the easiest to take care of.
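A first cut at that checker would not need to be fancy. Something along these lines would do (the staging file format here is made up; in practice the script would pull pending specs from a staging table instead):

  #!/bin/sh
  # staging.txt: one pending test spec per line, "name|ffmpeg arguments"
  # run each spec once and report whether the command line even executes cleanly
  while IFS='|' read -r name args; do
    if ./ffmpeg_g $args > /dev/null 2>&1; then
      echo "OK    $name"
    else
      echo "FAIL  $name (check sample path and parameters)"
    fi
  done < staging.txt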

Thanks for putting up with yet another edition of Multimedia Mike’s research notepad.