Over the years, I have occasionally heard about a program called netcat, but it was only recently that I realized what it was– an all-purpose network tool. I can’t count all the times that this tool would have helped me in the past 10 years, nor the number of times I entertained the idea of basically writing the same thing myself.
I have been considering the idea of a lightweight FATE testing tool. The idea would be a custom system that could transfer a cross-compiled executable binary to a target and subsequently execute specific command line tests while receiving the stdout/stderr text back via separate channels. SSH fits these requirements quite nicely today but I am wondering about targeting platforms that don’t have SSH (though the scheme does require TCP/IP networking, even if it is via SLIP/PPP). I started to think that netcat might fit the bill. Per my reading, none of the program variations that I could find are capable of splitting stdout/stderr into separate channels, which is sort of a requirement for proper FATE functionality.
RSH also seems like it would be a contender. But good luck finding the source code for the relevant client and server in this day and age. RSH has been thoroughly disowned in favor of SSH. For good reason, but still. I was able to find a single rshd.c source file in an Apple open source repository. But it failed to compile on Linux.
So I started thinking about how to write my own lightweight testing client/server mechanism. The server would operate as follows:
- Listen for a TCP connection on a predefined port.
- When a client connects, send a random number as a challenge. The client must perform an HMAC using this random number along with a shared key. This is the level of security that this protocol requires– just prevent unauthorized clients from connecting. The data on the wire need not be kept secret.
- The client can send a command indicating that it will be transferring binary data through the connection. This will be used to transfer new FFmpeg binaries and libraries to the target machine for testing.
- The client can send a command with a full command line that the server will execute locally. As the server is executing the command, it will send back stdout and stderr data via 2 separate channels since that’s a requirement for FATE’s correct functionality.
- The client will disconnect gracefully from the server.
The client should be able to launch each of the tests using a single TCP connection. I surmise that this will be faster than SSH in environments where SSH is an option. As always, profiling will be necessary. Further, the client will have built-in support for such commands as {MD5} and some of the others I have planned. This will obviate the need to transfer possibly large amounts of data over a conceivably bandwidth-limited link. This assumes that performing a task such as MD5 computation does not take longer than it would take to simply transfer the data back to a faster machine.
I can’t possibly be the first person to want to do this (in fact, I know I’m not– John K. also wants something like this). So is there anything out there that already solves my problems? I know of the Test Anything Protocol but I really don’t think it goes as far as I have outlined above.
Huh? What is the problem with finding RSH? Both Debian and Gentoo still have official packages for the rsh client and server (and the source is e.g. here: ftp://ftp.uk.linux.org/pub/linux/Networking/netkit/).
But I really can’t see what you would need it for.
Even the cheapest routers can usually support SSH, via a program called “dropbear”, which works even with uClibc.
If speed is an issue, ssh allows you to choose some very fast (though probably not really secure) ciphers like “arcfour”.
IIRC with that one you could fill up a 100 MBit connection even with an original Pentium 90 MHz.
I just tried RSH on a Fedora system… it’s there. That’s strange; I thought that various systems were symlinking RSH -> SSH these days (Ubuntu, maybe?).
Sure, even a slow computer can saturate a 100 MBit connection. However, I am still interested in running FATE remotely on a machine connected to the network with 802.11b wireless. That achieves about 500 kbytes/sec. If a test requires decoding and producing 10 megabytes of PCM audio, that’s an extra 20 seconds for network transfer. Another possible platform (though this is a long shot) is one that is only connected via a conventional RS-232 serial line running SLIP– 230 kbits/sec.
So I’m still interested in a lighter weight solution (at least in terms of network bandwidth). Now that I know RSH is still out there somewhere, I might look into that, or the Dropbear that you proposed. As a compromise, perhaps I can use RSH/SSH in conjunction with a small utility on the remote machine that interprets and executes test specs and, e.g., computes MD5 of the stdout before sending it back, vs. sending back the 10 megabytes of data from the example.
I just can’t figure out why your arguments should work out in favour of RSH over SSH. On a slow connection the encryption should be even less of an issue, whereas SSH also offers compression.
So in the situation you describe I’d say SSH would be a far better choice than RSH (though both might suck).
What kind of platforms do you want to support for which there is no SSH? I know that OpenSSH supports a lot of platforms, so its code should already be pretty portable, and I don’t expect it to use many inherently platform-specific operations. Adding support for another platform will likely be less work than implementing some of your other ideas.
Some other thoughts:
– you could extend an FTP server with a SITE command to issue your commands.
– socat ( http://www.dest-unreach.org/socat/ ) is netcat on steroids.
– Bash supports redirection to /dev/tcp/host/port to directly communicate over TCP (if this is enabled at compile-time).
If socat doesn’t split stdout/stderr into different channels, that’s an immediate dealbreaker.
The kind of machine I have in mind is this thing– a very low-spec MIPS netbook. I have a tough enough time getting a monolithic program compiled for this architecture. I’m frightened of the prospect of getting full-blown SSH or even RSH compiled for the thing.
Instead of compiling SSH yourself, have you tried whether you can, e.g., boot Debian (mipsel variant) from a USB stick on that device?
Alternatively there might be a way to install Debian in a chroot, giving you all the programs you need, too.
I’m not sure I’d fully recommend it, but these instructions should give you some ideas what you could do:
http://projects.kwaak.net/twiki/bin/view/Epc700/InstallDebianHowTo
I don’t know if the speed of this thing is acceptable for it, but the easiest way to compile FFmpeg would of course be to install gcc and compile it natively…
Remote cross-compiling would still be possible, however.
Here are some SSH performance stats from an AMD K6-2 500 MHz with 128 MB RAM running Ubuntu 8.04 on a 100 Mbit link:
3.4 MB/s – aes128-ctr cipher (default), hmac-md5 MAC (default)
3.4 MB/s – aes128-ctr cipher, umac-64 MAC
4.4 MB/s – arcfour cipher, hmac-md5 MAC
4.4 MB/s – arcfour cipher, umac-64 MAC
5.0 MB/s – no cipher, hmac-md5 MAC
5.4 MB/s – no cipher, umac-64 MAC
The last two use the “cipher-none” HPN-SSH patch to switch SSH to no encryption following the authentication step. Running “top” during transfer shows that the CPU is very busy, indicating the transfers are CPU-bound.
Doing a straight netcat transfer, I can get about 11.1 MB/s so there is definitely room for improvement. Unfortunately, the OpenSSH people have completely removed the “none” MAC (message authentication code) option and HPN-SSH doesn’t have that option so it is difficult to test without a MAC.
For performance stats using other ciphers and MAC algorithms, see https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/54180/comments/6 .
I tried stfufs ( http://www.guru-group.fi/too/sw/stfufs/ ), a FUSE filesystem over TCP/IP, but I could only get 2.7 MB/s even though the CPUs on both ends were mostly idle.
So on fast network connections with slow CPUs (like the one in your MIPS netbook), CPU will be the bottleneck if you use SSH to transfer data unless you find some way to turn off the MAC. I think Reimar’s claim that you can fill a 100 Mbit pipe with a Pentium 90 using SSH is pretty optimistic, assuming you’re using a stock OpenSSH binary.
@Denver Gingerich:
Yeah, on thinking about it, it must have been either a P90 with 10 MBit or an Athlon 800 with 100 MBit.
Also due to experiences with Windows/SMB networking I probably considered something around 7 MB/s as “filling” a 100 MBit connection.
Still, with compiler output I am not convinced that RSH would win significantly in speed against SSH+compression (compression meaning less to encrypt, less to authenticate, and less to transfer – including the checksums at the networking layers).
Though just testing it is probably the surest way to know.
Forget all the arcane stuff and just use SSH. If you’re worried about bloated monoliths, grab dropbear and use it. It’s fast and light and the only reason I would not recommend throwing out OpenSSH and picking up dropbear on ALL systems is the lack (last time I checked) of a proper privilege separation layer in the server.