The year was 1999, the month was May, and Newsweek (an American weekly news periodical) had a cover feature entitled “What You’ll Want Next”. The cover prominently featured a Sega Dreamcast controller (the console was slated for U.S. release a few months later). One of the features in the issue had an illustration of a future home and all the technological marvels that would arrive in coming years. I scanned the pictures and always wanted to write something about the prognostications contained within, some of which seemed a tad outlandish.
I never got around to it then (plus, I had no good place to publish it). But look at the time– it’s 10 years later already! And I still have the page scans lying around, having survived moves across at least a half-dozen “main desktop computers” over the intervening decade. So let’s have a look at where we were supposed to be by now.
A Really Smart House
The home of the future will be loaded with appliances that talk to the Internet — and to each other. A high-speed Net connection links to set-top boxes and PCs; devices — from reading tablets to washing machines — are connected through a local wireless network. Though pervasive and powerful, the technology isn’t intrusive.
Thanks to Måns for contributing FATE build/test results for the MIPS64 architecture, courtesy of his Gdium. This is in addition to his existing contributions for ARM, Alpha, and AVR32 architectures.
The Continuous Integration Maturity Model is a play on the Capability Maturity Model, something covered in software engineering curricula and then never seen again in the world of professional software development unless you happen to work for the U.S. government. But I digress.
The CIMM comes from a blog that is very concerned with continuous integration, perhaps because its authors are in the business of selling CI software. The post has an ad-hoc table that lists various properties deemed to be worthwhile in CI systems. The table was apparently assembled by a committee of people meeting at a CI conference (who knew there was such a thing, or that I would actually have a reason to care?).
Aside: CIMM might not be the best acronym they could have used. CIMM already stands for the humorous Capability Immaturity Model.
The table has some interesting ideas, so let’s evaluate how FATE is doing using this frame of reference:
Obviously, I have more than enough FATE-related work to keep my free time filled for the foreseeable future. But that doesn’t stop me from coming up with more ideas for completely revising the underlying architecture. And it’s always good to hash these ideas out on this blog since it: 1) helps me clarify the issues; 2) allows other people to jump in with improvements and alternatives; 3) allows me to put as much thought into these ideas as possible. Let’s face it– whatever design decisions I make for FATE are the ones the team tends to be stuck with for a long time.
People who dig into FATE and the various commands it executes in order to build and test FFmpeg often ask why I only perform single-threaded builds, i.e., why not build with ‘make -j2’ on a dual-core machine? It would be faster, right? Well, yes, but only for the build phase. The test phase (which usually takes longer) is still highly serial (though the ‘make test’ regression suite can also be parallelized). A more pragmatic reason for not multi-threading the build is that the stdout/stderr lines from parallel jobs can easily get jumbled together, which makes it harder to diagnose build failures.
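The jumbling is easy to reproduce with a toy Makefile; the file path and target names below are invented purely for this demonstration:

```shell
# Generate a throwaway two-target Makefile; each target prints a few lines.
# (/tmp/jumble.mk and the targets 'a' and 'b' are made up for illustration.)
printf 'all: a b\na:\n\t@for i in 1 2 3; do echo "a: line $$i"; done\nb:\n\t@for i in 1 2 3; do echo "b: line $$i"; done\n' > /tmp/jumble.mk

# Serial build: each target's output stays contiguous.
make -f /tmp/jumble.mk -j1

# Parallel build: lines from targets 'a' and 'b' may interleave,
# which is exactly what makes a real build failure harder to read.
make -f /tmp/jumble.mk -j2
```

(For what it’s worth, GNU Make later grew an `--output-sync` option, in version 4.0, that groups each job’s output and largely mitigates this.)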
I do, however, put both cores of the main dual-core FATE machine to use– I run 2 separate installations of the FATE script, divvying up the labor by having each core handle roughly half of the configurations. Thus far, one installation runs the x86_64 configs while the other runs the x86_32 configs inside a 32-bit chroot environment.
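Concretely, the setup amounts to something like the following sketch; the directory layout, chroot path, user name, and script name are all assumptions for illustration, not the real installation:

```shell
# Sketch only: every path and file name here is hypothetical.
# Instance 1 (one core): native x86_64 configs.
(cd ~/fate-x86_64 && ./fate-main.py configs-x86_64.txt) &

# Instance 2 (the other core): x86_32 configs inside a 32-bit chroot.
sudo chroot /srv/chroot32 su fate -c \
  'cd ~/fate-x86_32 && ./fate-main.py configs-x86_32.txt' &

wait  # wait for both instances
```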
Can I come up with a better parallelization system? I think I might be able to. And to what end? Taking this whole operation to the next level, where “next level” is loosely defined as getting a few hundred more tests into the database, perhaps while upgrading to a faster machine with more than 2 cores that is responsible for more than just native machine builds. I am also experimenting with moving the PowerPC builds to a faster (x86) machine for building. Better support for cross compiling and remote testing is driving much of this refactoring.
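For the PowerPC-on-x86 experiment, the build side of cross compiling boils down to pointing FFmpeg’s configure script at a cross toolchain. A minimal sketch, assuming a powerpc-linux-gnu toolchain is installed (the prefix varies by distribution):

```shell
# Cross-build FFmpeg for PowerPC on an x86 host.
# The toolchain prefix below is an assumption; adjust to match yours.
./configure \
    --enable-cross-compile \
    --cross-prefix=powerpc-linux-gnu- \
    --arch=ppc \
    --target-os=linux
make
```

The resulting binaries still have to be shipped to and exercised on an actual PowerPC box, which is where the remote-testing half of the refactoring comes in.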