Monthly Archives: May 2009

Cloudy Outlook

I don’t get this whole cloud computing thing, and believe me, I have been trying to understand it. Traditionally, I have paid little attention to emerging technology fads; if a fad sticks, then I might take the time to care. But I’m being a little more proactive with this one.


Obligatory cloud art

From what I have been able to sort out, the idea is that your data (the important part of your everyday computer work) lives on some server “out there”, in the “cloud” that is the internet. Veteran internet geeks like myself don’t find this to be particularly revolutionary. This is the essence of IMAP, at the very least, a protocol whose RFC is over 2 decades old. Cloud computing seems to be about extending the same paradigm to lots of different kinds of work, presumably with office-type documents (word processing documents, spreadsheets, and databases) leading the pack.

How is this all supposed to work? Intuitively, I wonder about security and data ownership issues. I don’t think we’re supposed to ask such questions. Every description I can find of cloud computing does a lot of hand-waving and asserts that everything will “just work”.

I had one computer science professor in college who lectured that “a bad idea is still a bad idea no matter how much money you throw at it.” I don’t yet know if this is a bad idea. But it’s definitely a big buzzword. I have been reading that Ubuntu is launching some kind of cloud service and a distribution of Linux that is integrated with said service. One part of this (or perhaps both parts) is called “Ubuntu One”.


Sony Micro Vault Tiny vs. quarter

My personal version of the computing cloud is a microscopic yet ridiculously high-density USB flash drive, something I have only recently discovered and grown accustomed to (I told you that I’m often behind the technological curve). I tend to bring it with me nearly everywhere now. When I analyze it in the context of the cloud, I worry about security and redundancy. That is, I should probably have an easy, periodic backup process in place at home. I should also use some kind of encrypted filesystem for good measure (EncFS over FUSE should fit the bill and operate on top of whatever filesystem is in place).
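As far as the easy, periodic backup goes, I picture something as simple as the following Python sketch, which just copies the drive’s contents into a timestamped directory; the mount point and destination paths are hypothetical placeholders, and the whole thing assumes the drive is already mounted:

    #!/usr/bin/env python
    # Minimal sketch of a periodic flash drive backup; the paths below are
    # hypothetical placeholders for wherever the drive mounts and wherever
    # the backups should live.
    import os
    import shutil
    import time

    SOURCE = "/media/usbstick"
    DEST = "/home/backups/usbstick"

    def backup():
        # Copy the entire drive into a timestamped directory so that older
        # snapshots stick around for a little redundancy.
        stamp = time.strftime("%Y%m%d-%H%M%S")
        target = os.path.join(DEST, stamp)
        shutil.copytree(SOURCE, target)
        print("backed up %s to %s" % (SOURCE, target))

    if __name__ == "__main__":
        backup()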

Benjamin Otte has recently posted the most cogent use case of (what might be) cloud computing. One aspect of his vision is that his desktop settings are the same no matter which computer he logs into. I can’t deny that that would be nice. I have long forgotten what it’s like to customize and personalize my desktop environment. This is because I work on so many different computers and virtualized sessions that it would simply be too much trouble to make the changes everywhere. I don’t see how my flash stick solution would be able to help in such a situation (though that’s not outside the realm of possibility). But I’m also not convinced that the cloud approach is the ideal solution either.

Then again, it’s not really up to me. I suspect it will be largely up to the marketers.


LLVM Recognition

I did a bunch of work with compilers in FATE tonight:

  • I formally inducted x86_32 and x86_64 configurations of LLVM for Linux into the FATE farm. I’ve heard that compiler is going to rescue us all someday. For the time being, it is exhibiting some problems with the FFmpeg source code, particularly on 32-bit.
  • I upgraded the gcc used for compiling the Mac OS X versions to gcc 4.0.1 build 5493, packaged with Xcode 3.1.3, which is the latest version as I understand it. I think this is the first time I have bothered to upgrade Xcode since I first installed it. Everything still runs smoothly there, both on 32- and 64-bit.
  • I updated the gcc-svn snapshots across x86_32, x86_64, and PowerPC, all on Linux. All 3 configurations are compiling now. That’s the good news. Regrettably, the PowerPC build is doing even worse now (i.e., failing more tests) than the recently released gcc 4.4.0.

Weighted Moving Averages

Everyone would like to see FFmpeg performance graphs based on data collected from FATE. One day, I got down to analyzing the millisecond runtime data that one configuration generated for one of the longer tests in the suite. The numbers were all over the map, and graphing them directly would have been confusing and meaningless.

Then I had an idea that draws on my limited experience with digital signal processing as it pertains to multimedia. What about plotting points that represent the average of point n along with, say, points (n-15)..(n+15)? Then I refined the idea a bit: how about multiplying each point by a factor such that point n gets the most consideration while points further away from n receive progressively less consideration?

And then I quickly covered up all evidence of the brainstorm for fear of catching a beatdown from a tough gang of statisticians for devising the most idiotic idea in the history of statistics. Imagine my surprise when I was recently reading up on a completely different topic and found the exact same techniques described in practical application. It turns out they even have formal names: moving averages, of which the various forms of weighted moving averages are a sub-category.

This is not the first time I have re-invented something that I would later learn is called a “wheel” (the first time I can recall was when I independently figured out a bubble sort algorithm). I’m sure a spreadsheet program can be coerced to massage the data in this manner, but I decided to go the Python route; I have to use Python to effectively extract the data from MySQL into a usable form anyway.

I created a Python script that takes a list of FATE configuration IDs and a test spec number. For each configuration ID, it gathers the history of millisecond run times for the specified test and computes a weighted moving average using the current run time and the 9 run times prior. The current run time is multiplied by 10, (n-1) is multiplied by 9, and so on down through (n-9), which is multiplied by 1. The weighted sum is then divided by 55 (10+9+…+1). For a more elegant mathematical explanation, see the Wikipedia entry on weighted moving averages. This script allows me to dump all the performance data for a series of configurations, such as all of the PowerPC configurations that run on the same machine.
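Here is a minimal sketch of that calculation in Python; the MySQL query that actually fetches the run times is omitted, and the numbers at the bottom are made up purely for illustration:

    # Minimal sketch of the weighted moving average described above; the
    # sample run times at the bottom are made up purely for illustration.
    def weighted_moving_average(run_times, window=10):
        averages = []
        for i in range(len(run_times)):
            # Use the current run time and up to (window - 1) prior ones.
            start = max(0, i - window + 1)
            samples = run_times[start:i + 1]
            # Oldest sample gets weight 1, newest sample gets the largest weight.
            weights = range(1, len(samples) + 1)
            weighted_sum = sum(w * t for w, t in zip(weights, samples))
            # For a full 10-sample window the divisor is 55 (10+9+...+1).
            averages.append(weighted_sum / float(sum(weights)))
        return averages

    print(weighted_moving_average(
        [4100, 4150, 4080, 4120, 4300, 4110, 4090, 4130, 4105, 4095, 4500]))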

Then, we turn the data over to OpenOffice Calc for graphing… <sigh…>


id RoQ video encode test, run on many successive revisions on PowerPC

Dear OpenOffice.org: I hate you so much. Sometimes you elect to plot data on a graph in a sane range and sometimes you opt to begin from 0. The latter is shown above, where the former would have been much more appropriate. As it stands, any useful information to be gathered from visualizing the trend of the weighted moving averages is lost. And don’t even get me started (again) on your fragility. Or your atrocious software update mechanism.

Anyway, the above graph shows performance over time for the idroq-video-encode test run on various PowerPC configurations. The graph actually does reveal at least one oddity: the orange pulse wave that represents gcc 4.1.2’s performance over time. That might be worth looking into, particularly since it’s on the high part of the wave right now.

Below is the same thing, only with data collected from running h264-conformance-mv1_brcm_d, which is the most computationally intensive H.264 conformance sample in the FATE suite. It also looks like I mislabeled ‘gcc 4.3.3’ as ‘gcc 4.3.2’ in both graphs. Oh well; I’m not going through the pain of regenerating those graphs with OpenOffice now.


long H.264 conformance test, run on many successive revisions on PowerPC

What to do about performance graphs going forward? I really don’t think it would be worthwhile to pull performance-over-time graphs from the database for arbitrary tests; most of the tests are simply too short to yield any useful trends. This seems better suited to the idea of running certain tasks less frequently: various configurations should run longevity tests on full movies encoded in various formats, once every day or two. CPU runtime data (in contrast to wall-clock time data) should be collected on those runs and graphed for analysis.
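For the record, here is roughly what I mean by CPU runtime versus wall-clock time, sketched in Python; the ffmpeg command line is only an illustrative placeholder:

    # Minimal sketch of collecting child-process CPU time instead of wall
    # clock time; the ffmpeg command line is only an illustrative placeholder.
    import os
    import time

    def timed_run(command):
        start_wall = time.time()
        start_times = os.times()
        os.system(command)
        end_times = os.times()
        wall_clock = time.time() - start_wall
        # os.times() fields 2 and 3 are the user and system CPU time charged
        # to child processes of this interpreter.
        child_cpu = (end_times[2] - start_times[2]) + \
                    (end_times[3] - start_times[3])
        return wall_clock, child_cpu

    wall_clock, child_cpu = timed_run("ffmpeg -i movie.avi -f null - 2> /dev/null")
    print("wall clock: %.3f s, child CPU: %.3f s" % (wall_clock, child_cpu))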

BTW, if anyone knows of better desktop graphing apps, do let me know. Any system, I guess. As long as it can read a CSV file and create a competent graph in a reasonable amount of time on a 2 GHz CPU, I should be able to tolerate it.
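In the meantime, even a throwaway Python script with matplotlib (an assumption on my part; any plotting library would do) can get from a CSV file to a graph; the file name and column layout in this sketch are hypothetical:

    # Minimal sketch of going from a CSV file to a graph with matplotlib;
    # the file name and column layout are hypothetical.
    import csv
    import matplotlib.pyplot as plt

    revisions = []
    averages = []
    with open("idroq-video-encode-ppc.csv") as csv_file:
        for row in csv.reader(csv_file):
            revisions.append(int(row[0]))    # SVN revision number
            averages.append(float(row[1]))   # weighted moving average, in ms

    plt.plot(revisions, averages)
    plt.xlabel("FFmpeg revision")
    plt.ylabel("run time (ms, weighted moving average)")
    # Keep the y-axis range close to the data instead of starting from 0.
    plt.ylim(min(averages) * 0.95, max(averages) * 1.05)
    plt.savefig("idroq-video-encode-ppc.png")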

Flip The Game

I decided to reverse the order of the machines on the main FATE page. That makes the Linux and Mac OS X machines float to the top. Sorry, BSD and Solaris people, but the Linux stuff just takes precedence.

And speaking of Solaris, you will notice that we have a new configuration: Solaris 10 on Sparc, compiling with gcc, with more compiler configurations hopefully to come. Thanks to JeffD for contributing these results.

Back on the topic of the front page: of course, I have every desire to update the entire web experience. But I’m still woefully inept at modern web development. Maybe I’m being too hard on myself; perhaps it’s better to say that I have too many higher-priority problems to solve for FATE. I have an entire other rant in progress about my experience with trying to understand modern web programming.

Look, I have all this raw data in a neat format. What is a good, quick, cross-browser method to display it in a friendly manner so that it can be easily sorted by various criteria? In different GUI APIs, I’m pretty sure I would coerce the data into some kind of DataGrid object. There’s nothing quite like that in plain HTML. JavaScript, perhaps? What’s out there? Where do I even start looking?