Category Archives: General

Lightweight FATE Testing

Over the years, I have occasionally heard about a program called netcat, but it was only recently that I realized what it was: an all-purpose network tool. I can’t count all the times that this tool would have helped me in the past 10 years, nor the number of times I entertained the idea of basically writing the same thing myself.

I have been considering the idea of a lightweight FATE testing tool: a custom system that could transfer a cross-compiled executable binary to a target machine and then execute specific command-line tests while receiving the stdout/stderr text back via separate channels. SSH fits these requirements quite nicely today, but I am wondering about targeting platforms that don’t have SSH (though the scheme does require TCP/IP networking, even if it is via SLIP/PPP). I started to think that netcat might fit the bill. Per my reading, though, none of the netcat variations I could find are capable of splitting stdout and stderr into separate channels, which is effectively a requirement for proper FATE functionality.

RSH also seems like it would be a contender. But good luck finding the source code for the relevant client and server in this day and age. RSH has been thoroughly disowned in favor of SSH. For good reason, but still. I was able to find a single rshd.c source file in an Apple open source repository. But it failed to compile on Linux.

So I started thinking about how to write my own lightweight testing client/server mechanism. The server would operate as follows (a rough sketch appears after the list):

  • Listen for a TCP connection on a predefined port.
  • When a client connects, send a random number as a challenge. The client must respond with an HMAC computed over this random number using a shared key. This is the level of security that this protocol requires: just keep unauthorized clients from connecting. The data on the wire need not be kept secret.
  • The client can send a command indicating that it will be transferring binary data through the connection. This will be used to transfer new FFmpeg binaries and libraries to the target machine for testing.
  • The client can send a command with a full command line that the server will execute locally. As the server is executing the command, it will send back stdout and stderr data via 2 separate channels since that’s a requirement for FATE’s correct functionality.
  • The client will disconnect gracefully from the server.
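
Here is a rough sketch, in Python, of what the server side might look like. Everything concrete in it is a placeholder of my own choosing rather than a settled protocol: the one-byte channel tags, the length-prefixed framing, the port number, and the shared key are all made up, output is buffered rather than streamed as the command runs, and the binary-transfer command is omitted.

    # Minimal sketch of the proposed protocol, server side (runs on the target).
    # Framing: 1-byte tag, 4-byte big-endian payload length, then the payload.
    # Tags: C = challenge, A = auth response, X = execute, O = stdout,
    # E = stderr, R = return code, Q = quit.  All of this is illustrative.
    import hashlib, hmac, os, socket, struct, subprocess

    SHARED_KEY = b"not-a-real-key"      # hypothetical pre-shared key
    PORT = 12345                        # hypothetical port

    def recv_exact(conn, count):
        data = b""
        while len(data) < count:
            chunk = conn.recv(count - len(data))
            if not chunk:
                raise ConnectionError("client went away")
            data += chunk
        return data

    def send_msg(conn, tag, payload):
        conn.sendall(tag + struct.pack(">I", len(payload)) + payload)

    def recv_msg(conn):
        header = recv_exact(conn, 5)
        length = struct.unpack(">I", header[1:])[0]
        return header[:1], recv_exact(conn, length)

    def handle(conn):
        # challenge/response: the client must return HMAC-SHA256(key, nonce)
        nonce = os.urandom(16)
        send_msg(conn, b"C", nonce)
        tag, digest = recv_msg(conn)
        expected = hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest()
        if tag != b"A" or not hmac.compare_digest(digest, expected):
            return
        while True:
            tag, payload = recv_msg(conn)
            if tag == b"X":             # execute a command line on this machine
                proc = subprocess.run(payload.decode(), shell=True,
                                      capture_output=True)
                send_msg(conn, b"O", proc.stdout)     # stdout channel
                send_msg(conn, b"E", proc.stderr)     # stderr channel
                send_msg(conn, b"R", str(proc.returncode).encode())
            elif tag == b"Q":           # graceful disconnect
                return

    if __name__ == "__main__":
        with socket.create_server(("", PORT)) as server:
            while True:
                conn, _ = server.accept()
                with conn:
                    try:
                        handle(conn)
                    except ConnectionError:
                        pass

A matching client would answer the challenge by sending hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest() in an "A" message, then issue "X" commands for each test and finish with a "Q".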

The client should be able to launch each of the tests using a single TCP connection. I surmise that this will be faster than SSH in environments where SSH is an option. As always, profiling will be necessary. Further, the server running on the target will have built-in support for such commands as {MD5} and some of the others I have planned. This will obviate the need to transfer possibly large amounts of data back over a conceivably bandwidth-limited connection. This assumes that performing tasks such as MD5 computation on the (possibly slow) target does not outweigh the time it would take to simply transfer the data back to a faster machine.
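
On the target side, the {MD5} idea boils down to something like the following; only the 32-character hex digest (plus stderr and the return code) ever needs to cross the network. Again, this is just my sketch of the concept, not a settled design.

    import hashlib, subprocess

    def run_with_md5(command_line):
        """Run a command on the target; return (md5 of stdout, stderr, return code).

        Hypothetical helper for a {MD5} built-in: the raw output stays on the
        target and only the digest is reported back over the wire."""
        proc = subprocess.run(command_line, shell=True, capture_output=True)
        return hashlib.md5(proc.stdout).hexdigest(), proc.stderr, proc.returncode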

I can’t possibly be the first person to want to do this (in fact, I know I’m not– John K. also wants something like this). So is there anything out there that already solves my problems? I know of the Test Anything Protocol but I really don’t think it goes as far as I have outlined above.

Newsweek Future Scans Time Capsule

The year was 1999, the month was May, and Newsweek (an American weekly news periodical) had a cover feature entitled “What You’ll Want Next”. The cover prominently featured a Sega Dreamcast controller (the console was slated for U.S. release a few months later). One of the features in the issue had an illustration of a future home and all the technological marvels that would arrive in coming years. I scanned the pictures and always wanted to write something about the prognostications contained within, some of which seemed a tad outlandish.


May 31, 1999 Newsweek magazine: “What You’ll Want Next” (cover found at: Yale Library Digital Collections)


I never got around to it back then (plus, I had no good place to publish it). But look at the time: it’s 10 years later already! And I still have the page scans lying around, having survived moves across at least a half-dozen “main desktop computers” over the intervening decade. So let’s have a look at where we were supposed to be by now.

Newsweek’s FutureHouse, pages 1 and 2 (click for larger images)

A Really Smart House

The home of the future will be loaded with appliances that talk to the Internet — and to each other. A high-speed Net connection links to set-top boxes and PCs; devices — from reading tablets to washing machines — are connected through a local wireless network. Though pervasive and powerful, the technology isn’t intrusive.


Cloudy Outlook

I don’t get this whole cloud computing thing, and believe me, I have been trying to understand it. Traditionally, I have paid little attention to emerging technology fads; if a fad sticks, then I might take the time to care. But I’m being a little more proactive with this one.


Obligatory cloud art

From what I have been able to sort out, the idea is that your data (the important part of your everyday computer work), lives on some server “out there”, in the “cloud” that is the internet. Veteran internet geeks like myself don’t find this to be particularly revolutionary. This is the essence of IMAP, at the very least, a protocol whose RFC is over 2 decades old. Cloud computing seems to be about extending the same paradigm to lots of different kinds of work, presumably with office-type documents (word processing documents, spreadsheets, and databases) leading the pack.

How is this all supposed to work? Intuitively, I wonder about security and data ownership issues. I don’t think we’re supposed to ask such questions. Every description I can find of cloud computing does a lot of hand-waving and asserts that everything will “just work”.

One of my computer science professors in college liked to say that “a bad idea is still a bad idea no matter how much money you throw at it.” I don’t yet know if this is a bad idea. But it’s definitely a big buzzword. I have been reading that Ubuntu is launching some kind of cloud service and a distribution of Linux that is integrated with said service. One part of this (or perhaps both parts) is called “Ubuntu One”.


Sony Micro Vault Tiny vs. quarter

My personal version of the computing cloud is a microscopic yet ridiculously high-density USB flash drive, something I have only recently discovered and grown accustomed to (I told you that I’m often behind the technological curve). I tend to bring it with me nearly everywhere now. When I analyze it in the context of the cloud, I worry about security and redundancy. I.e., I should probably have an easy, periodic backup process in place at home. Also, I should use some kind of encrypted filesystem for good measure (EncFS over FUSE should fit the bill and operate on top of whatever filesystem is in place).

Benjamin Otte has recently posted the most cogent use case of (what might be) cloud computing. One aspect of his vision is that his desktop settings are the same no matter which computer he logs into. I can’t deny that that would be nice. I have long forgotten what it’s like to customize and personalize my desktop environment. This is because I work on so many different computers and virtualized sessions that it would simply be too much trouble to make the changes everywhere. I don’t see how my flash stick solution would be able to help in such a situation (though that’s not outside the realm of possibility). But I’m also not convinced that the cloud approach is the ideal solution either.

Then again, it’s not really up to me. I suspect it will be largely up to the marketers.


Performance Smackdown: PowerPC

Someone asked me for performance numbers for the PowerPC, i.e., how efficiently a PowerPC CPU can decode certain types of multimedia via FFmpeg. So I ran my compiler benchmark script on the 5 compiler configurations currently in FATE. I did 2 runs, one with and one without AltiVec optimizations. I used the 512×224 MPEG-4 part 2 video with MP3 audio (104 minutes, ~144,000 frames). These tests were run on a 1.25 GHz PowerPC G4 (a Mac Mini running Linux). The FFmpeg source code was at SVN revision 18711.
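
For reference, each data point in a run like this amounts to timing an ffmpeg decode to a null output. The snippet below is a rough approximation of that idea, not my actual benchmark script; the paths and the run count are placeholders.

    # Time one ffmpeg build decoding the sample to a null output, keeping the
    # best wall-clock time of a few runs.  FFMPEG and SAMPLE are placeholders.
    import subprocess, time

    FFMPEG = "/path/to/ffmpeg"      # build from one compiler configuration
    SAMPLE = "/path/to/sample.avi"  # the MPEG-4 part 2 + MP3 test file

    def best_decode_time(runs=3):
        best = None
        for _ in range(runs):
            start = time.time()
            subprocess.run([FFMPEG, "-i", SAMPLE, "-f", "null", "-"],
                           stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
                           check=True)
            elapsed = time.time() - start
            best = elapsed if best is None else min(best, elapsed)
        return best

    if __name__ == "__main__":
        print("best decode time: %.2f seconds" % best_decode_time())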


PowerPC performance comparison

Interesting stuff: the performance trends do not parallel the chaos we have seen with x86_32 and x86_64. Instead, performance improves steadily from one compiler configuration to the next.

Suggestions for improvement welcome, though there don’t seem to be a lot of tunable parameters for PowerPC in gcc.
