
Continuous Integration Maturity Model

The Continuous Integration Maturity Model is a play on the Capability Maturity Model, something covered in software engineering curricula and then never seen again in the world of professional software development unless you happen to work for the U.S. government. But I digress.

The CIMM comes from a blog that is very concerned with continuous integration, perhaps because its authors are in the business of CI software. The post has an ad-hoc table that lists various properties deemed worthwhile in CI systems. The table was apparently assembled by a committee of people meeting at a CI conference (who knew there was such a thing, or that I would actually have a reason to care?).

Aside: CIMM might not be the best acronym they could have used. CIMM already stands for the humorous Capability Immaturity Model.

The table has some interesting ideas, so let’s evaluate how FATE is doing using this frame of reference:


Better Parallelization And Scalability

Obviously, I have more than enough FATE-related work to keep my free time filled for the foreseeable future. But that doesn’t stop me from coming up with more ideas for completely revising the underlying architecture. And it’s always good to hash these ideas out on this blog since it: 1) helps me clarify the issues; 2) allows other people to jump in with improvements and alternatives; 3) allows me to put as much thought into these ideas as possible. Let’s face it– whatever design decisions I make for FATE are the ones the team tends to be stuck with for a long time.


Parallel FATE

People who dig into FATE and the various commands it executes in order to build and test FFmpeg often ask why I only perform single-threaded builds, i.e., why not build with ‘make -j2’ on a dual-core machine? It would be faster, right? Well, yes, but only for the build phase. The test phase (which usually takes longer) is still highly serial (though the ‘make test’ regression suite can also be parallelized). A pragmatic reason for not parallelizing the build is that the stdout/stderr lines from concurrent jobs can easily get jumbled, which makes failures more difficult to diagnose.
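To make the tradeoff concrete, here is a minimal sketch of what one build/test cycle for a single configuration roughly looks like, with each phase's output captured in its own log; the paths, log file names, and configure options are hypothetical illustrations, not FATE's actual commands:

    # one configuration, built and tested serially; paths/options are made up
    cd /path/to/ffmpeg
    ./configure --prefix=/tmp/fate-install > configure.log 2>&1
    make > build.log 2>&1        # serial build: log lines stay in order
    make test > test.log 2>&1    # regression suite; usually the longest phase
    # with 'make -j2', lines from parallel jobs would interleave in build.log,
    # which is what makes failure diagnosis harder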

I do, however, put both cores of the main dual-core FATE machine to use: I run 2 separate installations of the FATE script, divvying up the labor so that each core handles roughly half of the configurations. Thus far, one installation runs the x86_64 configs while the other runs the x86_32 configs inside a 32-bit chroot environment.
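For the curious, something like the following is roughly how that two-installation split could be launched; the script name, directories, and chroot path here are assumptions for illustration, not the exact setup:

    # core 1: native x86_64 configurations
    (cd /home/fate/fate-x86_64 && ./fate-script.py) &
    # core 2: x86_32 configurations inside a 32-bit chroot
    sudo chroot /chroots/sid32 su fate -c 'cd /home/fate/fate-x86_32 && ./fate-script.py' &
    wait    # both halves run concurrently, one per core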

Can I come up with a better parallelization system? I think I might be able to. And to what end? Taking this whole operation to the next level, where “next level” is loosely defined as getting a few hundred more tests into the database while perhaps upgrading to a faster machine with more than 2 cores, one responsible for more than just native builds. I am also experimenting with moving the PowerPC builds to a faster (x86) machine. Better support for cross compiling and remote testing is driving some of this refactoring.
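As a rough idea of the cross compiling and remote testing direction, one could build on the fast x86 machine with a PowerPC toolchain and push the result to the target for execution; the toolchain prefix, host name, and paths below are assumptions, not the actual FATE configuration:

    # cross compile for PowerPC on an x86 build machine
    ./configure --enable-cross-compile --cross-prefix=powerpc-linux-gnu- \
                --arch=ppc --target-os=linux > configure.log 2>&1
    make > build.log 2>&1
    # copy the binary to the PowerPC box and run a quick sanity test remotely
    scp ffmpeg fate@ppc-box:/tmp/ffmpeg
    ssh fate@ppc-box '/tmp/ffmpeg -version' > remote-test.log 2>&1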


Cloudy Outlook

I don’t get this whole cloud computing thing, and believe me, I have been trying to understand it. Traditionally, I have paid little attention to emerging technology fads; if a fad sticks, then I might take the time to care. But I’m being a little more proactive with this one.


Obligatory cloud art

From what I have been able to sort out, the idea is that your data (the important part of your everyday computer work), lives on some server “out there”, in the “cloud” that is the internet. Veteran internet geeks like myself don’t find this to be particularly revolutionary. This is the essence of IMAP, at the very least, a protocol whose RFC is over 2 decades old. Cloud computing seems to be about extending the same paradigm to lots of different kinds of work, presumably with office-type documents (word processing documents, spreadsheets, and databases) leading the pack.

How is this all supposed to work? Intuitively, I wonder about security and data ownership issues. I don’t think we’re supposed to ask such questions. Every description I can find of cloud computing does a lot of hand-waving and asserts that everything will “just work”.

I had a computer science professor in college who lectured that “a bad idea is still a bad idea no matter how much money you throw at it.” I don’t yet know if this is a bad idea. But it’s definitely a big buzzword. I have been reading that Ubuntu is launching some kind of cloud service and a distribution of Linux that is integrated with said service. One part of this (or perhaps both parts) is called “Ubuntu One”.


Sony Micro Vault Tiny vs. quarter

My personal version of the computing cloud is a microscopic yet ridiculously high-density USB flash drive, something I have only recently discovered and grown accustomed to (I told you that I’m often behind the technological curve). I tend to bring it with me nearly everywhere now. When I analyze it in the context of the cloud, I worry about security and redundancy. I.e., I should probably have an easy, periodic backup process in place at home. I should also use some kind of encrypted filesystem for good measure (EncFS over FUSE should fit the bill and works on top of whatever filesystem is already in place).
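For example, a minimal EncFS-plus-backup setup along the lines described above might look like this; the mount points and backup destination are assumptions for illustration:

    # create/mount the encrypted view (EncFS prompts for a passphrase and
    # initializes the raw directory on first use); paths must be absolute
    encfs /media/usbstick/.encrypted /media/usbstick/clear
    # easy, periodic backup at home: copy the encrypted raw data
    rsync -a /media/usbstick/.encrypted/ /home/mike/backups/usbstick/
    # unmount the clear view when done
    fusermount -u /media/usbstick/clear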

Benjamin Otte has recently posted the most cogent use case of (what might be) cloud computing. One aspect of his vision is that his desktop settings are the same no matter which computer he logs into. I can’t deny that that would be nice. I have long forgotten what it’s like to customize and personalize my desktop environment, because I work on so many different computers and virtualized sessions that it would simply be too much trouble to make the changes everywhere. I don’t see how my flash stick solution would help in such a situation (though that’s not outside the realm of possibility). But I’m not convinced that the cloud approach is the ideal solution either.

Then again, it’s not really up to me. I suspect it will be largely up to the marketers.

Followup: