
Using gcovr with FFmpeg

When I started investigating code coverage tools to analyze FFmpeg, I knew there had to be an easier way to do what I was trying to do (obtain code coverage statistics on a macro level for the entire project). I was hoping there was a way to ask the GNU gcov tool to do this directly. John K informed me in the comments of a tool called gcovr. Like my tool from the previous post, gcovr is a Python script that aggregates data collected by gcov. gcovr proves to be a little more competent than my tool.
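If memory serves, basic usage is simple: run it from the build directory after the instrumented binaries have executed and it prints a per-file coverage summary (standard gcovr invocation; -r sets the root directory used to resolve source paths):

    # run from the build directory, after the test suite has executed
    gcovr -r .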

Results
Here is the spreadsheet of results, reflecting FATE code coverage as of this writing. All FFmpeg source files are on the same sheet this time, including header files, sorted by percent covered (ascending), then total lines (descending).

Methodology
I wasn’t easily able to work with the default output from the gcovr tool, so I modified it into a tool called gcovr-csv, which creates data that spreadsheets can digest more easily.

  • Build FFmpeg with '-fprofile-arcs -ftest-coverage' in both the --extra-cflags and --extra-ldflags configure options (the whole sequence is sketched in shell form after this list)
  • 'make'
  • 'make fate'
  • From build directory: 'gcovr-csv > output.csv'
  • Massage the data a bit, deleting information about system header files (assuming you don’t care how much of /usr/include/stdlib.h is covered — 66%, BTW)
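Put together, the sequence looks something like this (a sketch of the steps above; gcovr-csv is my modified script, and the grep filter is just one way to do the final massaging):

    # Instrument the build so that running FATE writes gcov counter data
    ./configure --extra-cflags='-fprofile-arcs -ftest-coverage' \
                --extra-ldflags='-fprofile-arcs -ftest-coverage'
    make
    make fate                  # exercises the suite, producing .gcda files
    gcovr-csv > output.csv     # aggregate per-file coverage into CSV
    grep -v '^/usr/include' output.csv > ffmpeg-only.csv   # drop system headers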

Leftovers
I became aware of some spreadsheet limitations thanks to this tool.

Bye Bye FATE Machine

This is the computer that performed the lion’s share of FATE cycles for the past 1.5 years before Mans put a new continuous integration system into service. I’ve now decided to let the machine go. I can’t get over how odd this feels since this thing is technically the best machine I own.

[photo of the Shuttle PC that served as the FATE machine]
It’s a small form factor Shuttle PC (SD37P2 v2); Core 2 Duo 2.13 GHz; 2 GB RAM; 400 GB SATA HD; equipped with the only consistently functional optical drive in my house (uh oh). I used it as my primary desktop from March 2007 – November 2008, at which point I repurposed it for FATE cycles.

As mentioned, the craziest part is that this is technically the best computer in my house. My new EeePC 1201PN isn’t at quite the same level; my old EeePC can’t touch it, of course; the Mac Mini has a little more RAM but falls short in nearly every other area. But the Shuttle just isn’t seeing much use since the usurpation. I had it running automated backup duty for multimedia.cx, but that’s easy enough to move to another, lower-powered system.

Maybe the prognosticators are correct and the PC industry has matured to the point where raw computing power simply doesn’t matter anymore. I fancy myself as someone who knows how to put CPU power to work but even I don’t know what to do with the computing capacity I purchased over 3 years ago.

Where will the Shuttle go? A good home, I trust– I know a family that just arrived in the country and could use a computer.

ANSI FATE

The new FATE server is shaping up well. I think most of the old configurations have been migrated to the new server. I see one new compiler for x86_64– PathScale. It’s not faring particularly well at this point.

New Tests
As I write this, I notice that there are now an even 700 tests, twice as many as the last time I trumpeted such a milestone. (It should be noted that the new FATE system finally breaks down the master regression suite into individual tests.) Thankfully, it’s no longer necessary to wait for me to create or edit tests (anyone with FFmpeg privileges can do this), nor is it necessary to keep up with this blog to know exactly what tests are new. Now, you can simply inspect the file history on tests/fate.mak and tests/fate2.mak (I think these 2 files will merge in the near future).
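Checking that history is a one-liner from a checkout (FFmpeg was in Subversion at the time; the Git equivalent works the same way):

    svn log tests/fate.mak tests/fate2.mak
    # or, from a Git checkout:
    git log -p -- tests/fate.mak tests/fate2.mak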

Vitor, as of r24865: “Add FATE test for ANSI/ASCII animation and TTY demuxer.” Eh? What’s this about? I admit I was completely removed from FFmpeg development for much of June and July so I could have missed a lot. Fortunately, I can check the file history to see which lines were added to make this test happen. And if FATE is exercising the test, you know exactly where the samples will live. Here’s this new decoder in action on the relevant sample:

[screenshot: the new decoder rendering the ANSI animation sample]

The file history fingers Suxen drol/Peter Ross for this handiwork. I might have guessed– the only person who is arguably more enamored with old, weird formats than even I. Now we wait for the day that YouTube has support for this format. I’m sure there are huge archives of these animations out there (and I wager that Trixter and Jason Scott know where).

[animated screenshot caption: It’s an animation — it just keeps going]

Meanwhile, the FATE suite now encompasses a bunch of perceptual audio formats, thanks to the 1-off testing method and a few other techniques. These formats include Bink audio, WMA Pro, WMA Voice, Vorbis, ATRAC1, ATRAC3, MS-GSM, AC3, E-AC3, Nellymoser, TrueSpeech, Intel Music Coder, QDM2, RealAudio Cooker, QCELP (just going down the source control log here), and, no doubt, others.

Then there’s this curious tidbit: “Add FATE test for WMV8 DRM”. The test spec is "fate-wmv8-drm: CMD = framecrc -cryptokey 137381538c84c068111902a59c5cf6c340247c39 -i $(SAMPLES)/wmv8/wmv_drm.wmv -an". I would still like to investigate FFmpeg’s cryptographic capabilities, which I suspect are moving toward functioning as a complete SSL stack one day.
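If I read the FATE framecrc wrapper correctly, that spec boils down to an ffmpeg command along these lines (my reconstruction, not literal harness output):

    # decode the DRM-protected sample with the supplied key and emit
    # per-frame CRCs; -an discards the audio stream
    ffmpeg -cryptokey 137381538c84c068111902a59c5cf6c340247c39 \
        -i $SAMPLES/wmv8/wmv_drm.wmv -an -f framecrc -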

New Platforms
As for new platforms, the new FATE system finally allows testing on OS/2 (remember that classic? It was “the totally cool way to run your computer”). Thanks to Dave Yeo for taking this on.

Further, a new MIPS-based platform recently appeared on the FATE list. This one reports itself as running on a 74kf CPU. Googling for this processor quickly brings up Mans’ post about the Popcorn Hour device. So, congratulations to him for getting the mundane box to serve a higher purpose. Perhaps one day, I’ll be able to do the same for that Belco Alpha-400 netbook.

FFmpeg and Code Coverage Tools

Code coverage tools likely occupy the same niche as profiling tools: tools that you’re supposed to use somewhere during the software engineering process but probably never quite get around to, usually because you’re too busy adding features or fixing bugs. But there may come a day when you wish to learn how much of your code is actually being exercised in normal production use. For example, the team charged with continuously testing the FFmpeg project would be curious to know how much code is being exercised, especially since many of the FATE test specs explicitly claim to be “exercising XYZ subsystem”.

The primary GNU code coverage tool is called gcov and is probably already on your GNU-based development system. I set out to determine how much FFmpeg source code is exercised while running the full FATE suite. I ran into some problems when trying to use gcov on a project-wide scale. I spackled around those holes with some very ad-hoc solutions. I’m sure I was just overlooking some more obvious solutions about which you all will be happy to enlighten me.
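For anyone who hasn’t used it, the basic single-file gcov workflow goes like this (generic gcc/gcov usage, nothing FFmpeg-specific):

    # compile with instrumentation; running the binary writes .gcda count data
    gcc -fprofile-arcs -ftest-coverage -o demo demo.c
    ./demo
    # writes demo.c.gcov, an annotated listing with per-line execution counts
    gcov demo.c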

Results
I’ve learned to cut to the chase earlier in blog posts (results first, methods second). With that, here are the results I produced from this experiment. This Google spreadsheet contains 3 sheets: the first contains code coverage stats for a bunch of FFmpeg C files, sorted first by percent coverage (ascending), then by number of lines (descending), thus highlighting which files have the most uncovered code (ffserver.c currently tops that chart). The second sheet has files for which no stats were generated. The third sheet has “problems”: files that were rejected by my ad-hoc script.
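The percentages boil down to counting executable lines versus executed lines in gcov’s annotated output, where the count field is '-' for non-executable lines and '#####' for executable lines that never ran. A rough sketch of that arithmetic (an illustration, not my actual script):

    awk -F: '{
        gsub(/ /, "", $1)
        if ($1 == "-") next            # not an executable line
        total++
        if ($1 != "#####") covered++   # executed at least once
    } END {
        if (total)
            printf "%.2f%% (%d/%d lines)\n", 100*covered/total, covered, total
    }' ffserver.c.gcov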

Here’s a link to the data in CSV if you want to play with it yourself.

Using gcov with FFmpeg