Continuous Integration Maturity Model

The Continuous Integration Maturity Model is a play on the Capability Maturity Model, something covered in software engineering curricula and then never seen again in the world of professional software development unless you happen to work for the U.S. government. But I digress.

The CIMM comes from a blog that is very concerned with continuous integration, perhaps because its authors are in the business of CI software. The post has an ad-hoc table listing various properties deemed worthwhile in CI systems. The table was apparently assembled by a committee of people meeting at a CI conference (who knew there was such a thing, or that I would actually have a reason to care?).

Aside: CIMM might not be the best acronym they could have chosen, since it already stands for the humorous Capability Immaturity Model.

The table has some interesting ideas, so let’s evaluate how FATE is doing using this frame of reference:


  • Using source control: Check
  • Nightly builds: Not exactly; FATE just runs its regular builds whenever new code is detected in source control
  • Current issues tracked & knowledge base: Check


  • Builds triggered on commit: Check, for the most part; FATE’s distributed architecture, where individual build clients are often behind firewalls, does not really allow the server to connect to clients and order builds. It has to suffice for the clients to poll source control often enough.
  • Automated deploy to dev: Not sure what this means.
  • Unit tests on every build: Check
  • Generate change log: What do you suppose this means? Create text files with ‘svn diff’?
  • Collect code coverage data: Check
  • Code integrity checks: Proposed
  • Static analysis used: Proposed
  • Run-time analysis: Proposed
  • Automated API documentation generation: Proposed
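The polling model described above is simple to sketch. What follows is a minimal, hypothetical illustration of the core loop, not FATE’s actual client code: each pass asks source control for the newest revision (in real life, by shelling out to something like ‘svn info’) and kicks off a build only when that revision has moved past the last one built.

```python
import time

def poll_and_build(get_revision, build, last_built, polls, interval=0):
    # One polling pass per iteration: ask source control for the newest
    # revision and trigger a build only when it has advanced past the
    # last revision this client built.
    for _ in range(polls):
        rev = get_revision()
        if rev > last_built:
            build(rev)
            last_built = rev
        time.sleep(interval)
    return last_built

# Example with a simulated revision history standing in for a real
# 'svn info' call; revisions 101 and 103 each trigger one build.
revs = iter([100, 100, 101, 103])
built = []
poll_and_build(lambda: next(revs), built.append, 100, 4)
print(built)  # [101, 103]
```

In a real client, `get_revision` and `build` would wrap the version-control and build commands; the loop itself is the whole trick.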


  • Automated deployments to testing environments: This probably doesn’t apply to a utility like FFmpeg; it’s probably more applicable to, e.g., a web service.
  • On demand deployments to controlled environments: Same as previous
  • Per env. smoke test: Smoke tests run on each testing / controlled environment?
  • Manual test results in CI server: Manual test results are likely different from the automated tests that FATE already performs. There really isn’t any functionality that needs manual testing (not counting ffplay here).
  • Business visibility, reports: Probably moot for FFmpeg
  • Flag a CI build as Release Candidate (promotion): This is a great idea that I think Reimar has suggested
  • Product activity metrics: This is pretty open-ended and could mean a lot of things
  • Auto-update defect tracking: Does this mean having FATE interact with the Roundup issue tracker when something breaks? Ouch
  • Pre-commit builds: I imagine this means allowing developers to commit to private branches and ask the CI system to run a test build to make sure everything checks out. A lot of CI systems seem to support it, and I could probably implement it for FATE as well. I don’t think the main tree is in such a state of flux as to necessitate such a strategy.
  • Data roll-up: No idea what this means
  • Automatic cleanup of old data: I’m sure I could do this if I had old data to dispose of
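That last item is worth a sketch anyway. Assuming the results live in a SQLite database (the table and column names below are hypothetical, not FATE’s real schema), pruning old data could be as simple as a dated DELETE:

```python
import sqlite3
import time

def prune_old_results(db_path, max_age_days=90):
    # Delete build records older than the cutoff. 'build_results' and
    # 'timestamp' are hypothetical names; a real schema would differ,
    # and any related tables would need the same treatment.
    cutoff = time.time() - max_age_days * 86400
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        conn.execute("DELETE FROM build_results WHERE timestamp < ?",
                     (cutoff,))
    conn.close()
```

Run from a cron job, something like this would keep the database from growing without bound once there is actually history worth discarding.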


  • Automated func. testing: Does func. stand for functional? Not sure how that would differ from all of the other testing types
  • Multi-threaded / scalable build systems: FATE sort of supports this via multiple client instances on a multi-core machine. But I have proposed some more scalable and efficient solutions.
  • Change reporting / SQA impacts: Not sure if this is applicable to FFmpeg, especially since I seem to be the primary SQA already.
  • Defect trending: This is an interesting idea to consider
  • Identify problem code from metrics: This is also interesting, but I really don’t have any ideas about how to make a go of it for FATE
  • Auto-deploy to prod: N/A for FFmpeg, similar to the deployments to testing / controlled environments from the intermediate phase.
  • Auto-rollback in prod: Same as previous
  • Environment monitoring: Same as previous
  • Alerts based on build metric thresholds: This sounds like it pertains to sending out emails when the build breaks. That doesn’t sound right, though. That kind of thing should really be at the novice level. Either way, that’s still in “proposed” status for FATE.
  • Security scans: This is probably pertinent to web services. The idea of doing fuzz testing is something I would group with “Run-time analysis” from the Novice stage.
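As for scaling builds across cores, the multiple-instance approach amounts to running independent build configurations in parallel. Here is a hypothetical sketch of that idea; `run_configuration` is a stand-in, since a real client would shell out to configure/make/tests for each configuration (which is also why threads suffice: the heavy lifting happens in child processes, not the interpreter).

```python
from concurrent.futures import ThreadPoolExecutor

def run_configuration(config):
    # Hypothetical stand-in for one client pass over a single build
    # configuration; a real version would invoke the build and test
    # commands as child processes and return their results.
    return "built %s" % config

def build_all(configs, workers=4):
    # Roughly equivalent to running 'workers' independent client
    # instances side by side on a multi-core machine.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_configuration, configs))

print(build_all(["x86_64-gcc", "x86_64-clang"]))
# ['built x86_64-gcc', 'built x86_64-clang']
```

`pool.map` preserves input order, so results line up with the configuration list regardless of which build finishes first.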


  • Continuous deployment to prod: I suspect this refers to web services.

So a good number of these items seem to assume that the program being tested is something like a web service. They might come in handy if I ever set up a new FATE installation to test the FATE software itself.