
Practical Cloud

Who in their right mind would ever want to store their working documents somewhere, out there, “in the cloud”, i.e., on someone else’s servers? I openly wondered this a few weeks ago, and I have wondered about it ever since the idea was first proposed many years ago.

It turns out that the answer is… me.

Here’s how it happened: I contribute to a video game database named MobyGames. A long time ago, I started creating a series of plain ASCII files to help me track which games aren’t in the database yet. Other people wanted to submit new lists and help me maintain the existing lists. For the last 6 months, I have been occasionally brainstorming and researching how to create a very simple, database-backed, collaborative web application.

Yesterday, I thought of a better solution: a Google spreadsheet. My, that was easy. It does pretty much everything I was hoping my collaborative web app would do, and it required zero coding on my part.

People often suggested that I set up a wiki in order to manage this type of data. I generally consider a wiki to be the poor man’s content management system (CMS), little more than a giant, distributed, collaborative whiteboard (ironically, before I set up the MultimediaWiki on top of MediaWiki, I had again spent a long time brainstorming my own custom database-backed web app for the same purpose). I wanted a little more structure imposed on this data, which is exactly what a spreadsheet can provide. A proper database would be even better, but I’m willing to compromise for the sake of just having something useful with minimal effort on my part.

Still, I had hoped that writing a simple web app in some kind of existing, open source framework would be a good warm-up exercise for building a more complex web app out of FATE. My occasional study of web frameworks over the past 6 months has taught me that it’s something I genuinely don’t wish to mess with.

Lightweight FATE Testing

Over the years, I have occasionally heard about a program called netcat, but it was only recently that I realized what it was: an all-purpose network tool. I can’t count the number of times this tool would have helped me in the past 10 years, nor the number of times I entertained the idea of basically writing the same thing myself.

I have been considering the idea of a lightweight FATE testing tool. The idea would be a custom system that could transfer a cross-compiled executable binary to a target machine and subsequently execute specific command line tests, receiving the stdout/stderr text back via separate channels. SSH fits these requirements quite nicely today, but I am wondering about targeting platforms that don’t have SSH (though the scheme does require TCP/IP networking, even if it is via SLIP/PPP). I started to think that netcat might fit the bill, but per my reading, none of the program variations I could find can split stdout/stderr into separate channels, which is a requirement for proper FATE functionality.

RSH also seems like a contender, but good luck finding the source code for the relevant client and server in this day and age; RSH has been thoroughly disowned in favor of SSH (for good reason, but still). I was able to find a single rshd.c source file in an Apple open source repository, but it failed to compile on Linux.

So I started thinking about how to write my own lightweight testing client/server mechanism. The server would operate as follows (a rough code sketch follows the list):

  • Listen for a TCP connection on a predefined port.
  • When a client connects, send a random number as a challenge. The client must perform an HMAC using this random number along with a shared key. This is the level of security that this protocol requires: just prevent unauthorized clients from connecting. The data on the wire need not be kept secret.
  • The client can send a command indicating that it will be transferring binary data through the connection. This will be used to transfer new FFmpeg binaries and libraries to the target machine for testing.
  • The client can send a command with a full command line that the server will execute locally. As the server is executing the command, it will send back stdout and stderr data via 2 separate channels since that’s a requirement for FATE’s correct functionality.
  • The client will disconnect gracefully from the server.
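
To make the idea concrete, here is a rough Python sketch of what such a server might look like. Everything in it is invented for illustration: the line-based framing, the command names (PUT/EXEC/QUIT), the tagged output records, and the shared key are my assumptions, not actual FATE code or any existing protocol.

    # Hypothetical lightweight test server; framing and command names are invented
    import hashlib, hmac, os, socket, struct, subprocess

    SHARED_KEY = b"change-me"   # both ends agree on this out of band
    PORT = 12345                # arbitrary predefined port

    def handle(conn):
        f = conn.makefile("rwb")
        # challenge/response: the client must HMAC the random nonce with the shared key
        nonce = os.urandom(16)
        f.write(nonce)
        f.flush()
        expected = hmac.new(SHARED_KEY, nonce, hashlib.sha1).digest()
        if not hmac.compare_digest(f.read(len(expected)), expected):
            conn.close()
            return
        while True:
            line = f.readline().strip()
            if not line or line == b"QUIT":
                break
            if line.startswith(b"PUT "):              # "PUT <filename> <size>", then raw bytes
                name, size = line.split()[1:]
                with open(name.decode(), "wb") as out:
                    out.write(f.read(int(size)))
                os.chmod(name.decode(), 0o755)        # transferred binaries must be executable
            elif line.startswith(b"EXEC "):           # "EXEC <command line>"
                p = subprocess.Popen(line[5:].decode(), shell=True,
                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                out, err = p.communicate()
                # stdout and stderr go back as separately tagged records
                for tag, data in ((b"O", out), (b"E", err)):
                    f.write(tag + struct.pack(">I", len(data)) + data)
                f.write(b"R" + struct.pack(">i", p.returncode))
                f.flush()
        conn.close()

    if __name__ == "__main__":
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        while True:
            client, _ = srv.accept()
            handle(client)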

The client should be able to launch each of the tests using a single TCP connection. I surmise that this will be faster than SSH in environments where SSH is an option. As always, profiling will be necessary. Further, the client will have built-in support for such commands as {MD5} and some of the others I have planned. This will obviate the need to transfer possibly large amounts of data over a conceivably bandwidth-limited connection. This assumes that performing such tasks as MD5 computation does not outweigh the time it would take to simply transfer the data back to a faster machine.
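
Correspondingly, a hypothetical client for the sketch above might reuse one connection for a whole batch of tests. Again, the host name, port, framing, and sample command lines are placeholders of my own invention:

    # Hypothetical client matching the server sketch: one TCP connection for all tests
    import hashlib, hmac, socket, struct

    SHARED_KEY = b"change-me"

    def read_reply(f):
        out = err = b""
        while True:
            tag = f.read(1)
            if tag == b"R":                           # return-code record ends the reply
                return out, err, struct.unpack(">i", f.read(4))[0]
            size = struct.unpack(">I", f.read(4))[0]
            data = f.read(size)
            if tag == b"O":
                out += data
            else:
                err += data

    sock = socket.create_connection(("target-device", 12345))
    f = sock.makefile("rwb")
    f.write(hmac.new(SHARED_KEY, f.read(16), hashlib.sha1).digest())   # answer the challenge
    f.flush()
    for cmdline in ["./ffmpeg -version", "./ffmpeg -i input.avi -f framecrc -"]:
        f.write(b"EXEC " + cmdline.encode() + b"\n")
        f.flush()
        out, err, rc = read_reply(f)
        print(cmdline, "->", rc, len(out), "bytes of stdout")
    f.write(b"QUIT\n")
    f.flush()
    sock.close()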

I can’t possibly be the first person to want to do this (in fact, I know I’m not: John K. also wants something like this). So is there anything out there that already solves my problems? I know of the Test Anything Protocol but I really don’t think it goes as far as I have outlined above.

RAM Disk Experiment

Science project: Can FATE performance be improved — significantly or at all — by running as much of the operation as possible from RAM? My hypothesis is that it will speed up the overall build/test process, but I don’t know by how much.

Conclusion and spoiler: The RAM disk makes no appreciable performance difference. Linux’s default caching is more than adequate.

There are 4 items I am looking at storing in RAM: the FFmpeg source code, the built objects, the ccache files, and the suite of FATE samples. This experiment will deal with placing the first 3 into RAM.

Method:

  • Clear ccache and compile FFmpeg on the disk. Do this thrice and collect “wall clock” numbers using the ‘time’ command line prefix, e.g.:

      time (../ffmpeg/configure --prefix=install-directory --cc="ccache gcc" && \
            make && make install)

    The second and third runs should be faster due to Linux’s usual file caching in memory. (A small driver script for automating these timed runs is sketched after this list.)

  • Restart the machine.
  • Perform 3 more runs using the existing cache.
  • Restart the machine.
  • Set up a 1GB RAM disk as outlined by this tutorial.
  • Copy the source tree into the RAM disk and configure ccache to use a directory on the RAM disk. Re-run the last step and collect numbers.
  • Bonus: restart the machine again and compile the source without ccache in order to measure the performance hit incurred by ccache when there are no files cached.
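
In case it is useful, here is a tiny driver sketch for automating the timed runs described in the first step. The paths and install prefix are assumptions matching the description above, and it only records wall-clock time; CPU time would still come from the ‘time’ prefix shown earlier.

    # Rough driver for the timed build runs; paths and prefix are assumptions
    import subprocess, time

    CONFIGURE = '../ffmpeg/configure --prefix=install-directory --cc="ccache gcc"'

    def timed_build():
        start = time.time()
        subprocess.run(CONFIGURE + " && make && make install", shell=True, check=True)
        return time.time() - start

    subprocess.run("ccache -C", shell=True, check=True)   # wipe the compiler cache first
    for run in range(1, 4):
        print("run %d: %d seconds (wall clock)" % (run, timed_build()))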

Hardware: MSI Wind Nettop with 1.6 GHz N330 Atom (dual-core, hyperthreaded); 2 GB of DDR2 533 RAM; 160 GB, 7200 RPM SATA HD with an ext3 filesystem. I don’t know a good way to graph this, so here are the raw numbers. The first number of each pair is wall clock time, the second is CPU time.

On disk:
run 1: 15:41, 14:32
run 2:  1:43,  1:12
run 3:  1:43,  1:12

On disk, after restart:
run 1:  1:50,  1:13
run 2:  1:42,  1:13
run 3:  1:43,  1:12

RAM disk (ext2):
run 1: 15:37, 14:35
run 2:  1:39,  1:12
run 3:  1:40,  1:13

From startup, no ccache:
run 1: 15:12, 14:12

Building from disk after a restart shows a difference of about 8 seconds of wall-clock time, during which all of the relevant files are read back into the OS’s file cache. The run without ccache shows that using ccache with no prior cache incurs a penalty of nearly 30 seconds while the cache is populated.

And since I know you’re wondering, here’s what happens when I wipe the ccache and just let this thing rip with a multithreaded ‘make -j5’ build:

On disk, with ccache, multithreaded:
run 1: 6:51, 24:12
run 2: 1:05, 2:18
run 3: 0:54, 1:41
run 4: 0:54, 1:40

I did 4 runs this time because I wanted to see whether a 4th set of numbers would be consistent with the 3rd.

I know these results may elicit a big “duh!” from many readers, but I still wanted to prove it to myself.

Newsweek Future Scans Time Capsule

The year was 1999, the month was May, and Newsweek (an American weekly news periodical) had a cover feature entitled “What You’ll Want Next”. The cover prominently featured a Sega Dreamcast controller (the console was slated for U.S. release a few months later). One of the features in the issue had an illustration of a future home and all the technological marvels that would arrive in coming years. I scanned the pictures and always wanted to write something about the prognostications contained within, some of which seemed a tad outlandish.


"What You'll Want Next" Newsweek cover (May 31, 1999)

May 31, 1999 Newsweek magazine: “What You’ll Want Next” (cover found at: Yale Library Digital Collections)


I never got around to it back then (plus, I had no good place to publish it). But look at the time: it’s 10 years later already! And I still have the page scans lying around, having survived moves to at least a half-dozen “main desktop computers” over the intervening decade. So let’s have a look at where we were supposed to be by now.

Newsweek's FutureHouse, pages 1 and 2

A Really Smart House

The home of the future will be loaded with appliances that talk to the Internet — and to each other. A high-speed Net connection links to set-top boxes and PCs; devices — from reading tablets to washing machine — are connected through a local wireless network. Though pervasive and powerful, the technology isn’t intrusive.
