The x86_32 and x86_64 FATE build machines are actually VMware Fusion-hosted virtual machines running in Mac OS X (which, indeed, shall one day perform FATE duty as well). I decided to run with the flock and use Ubuntu for these VMs. However, even with this Mac Mini maxed out to its unofficial limit of 3 gigabytes of RAM, I couldn’t help thinking that they were wasting a lot of resources by running full GNOME desktops. Each VMware session has 1/2 GB of RAM allocated to it and either an 8 or 16 GB disk. Overkill, or so I suspected. I won’t bore you with the tales of the minimal hardware on which I first started running Linux because many of you were once there. I made it a goal to get each build machine down to using a 2 GB disk and hopefully 128 MB of RAM.
I briefly researched Linux distributions that are designed to be tiny, like Damn Small Linux. But I have grown reasonably comfortable with managing Ubuntu and would like to see what I can do to make it lean. At first, I thought I could allocate a small disk, install from the desktop CD image, and then either configure a minimal installation up front or remove lots of packages and configure for no-graphics startup after the install. It turns out that the Ubuntu desktop installer does not have much in the way of customization options, save for timezone and keyboard layout. That’s the trade-off they present for being so mainstream and simple. But the CD also cannot install if it is only given 2 GB of disk to work with; it just sort of hangs at one point.
There are two other install options. The first is the “alternate” desktop ISO. Its description sounds promising, but in practice it attempts the same desktop installation. The last option is the server installation. I am a little wary of Linux distros designated as “server” distros. I have been bitten at least once by a default server installation that was locked down way too tight, like a guard dog that won’t let me near my own junkyard.
But I give it a whirl anyway, and the server version of Ubuntu turns out to be exactly what I was looking for: no graphics, no office or multimedia applications, just a minimal install that occupies only a third of the 2 GB disk.
Next, I check whether I can get away with allocating a meager 128 MB of RAM to each system. How does one determine absolute memory usage in Linux, what with buffers and caches and so forth? ‘top’ will generally report that all memory is being used anyway. Fortunately, I read about a tool called ‘free’, which can be run in a 3-second polling mode:
$ free -o -s 3
             total       used       free     shared    buffers     cached
Mem:        253080     212020      41060          0      36836     101876
Swap:            0          0          0
Actually, I just noticed that ‘top’ also reports the same buffer and cache statistics. Anyway, I had no trouble compiling FFmpeg on x86_32 with only 128 MB of RAM. I tried the same on x86_64. When the build process got to libavcodec/motion_est.c (oh yes, you know the monster of which I speak), I watched the buffers and cached columns gradually shrink while the used column steadily rose. Eventually, the operating system’s out-of-memory killer decided that the process was too fat to live. The upshot is that the x86_64 VM gets 256 MB of RAM. How about that? 64-bit programs really do need twice as much memory.
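For anyone who wants to watch the same meltdown, the polling trick from above can simply be left running while the offending file is rebuilt. This is only a sketch; the log filename is arbitrary and the per-object make target is an assumption about the build system:

$ free -o -s 3 > memlog.txt &      # log memory statistics every 3 seconds in the background
$ make libavcodec/motion_est.o     # rebuild only the problem file (target name assumed)
$ kill %1                          # stop the background logger when the build finishes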
There’s still the issue of disk usage; 2 GB doesn’t stretch as far as it used to. Fortunately, the FATE test suite lives on a different machine. But I do need to store many different compilers and, hopefully, the most recent ccache output for each of them. The big space management strategy here: configure FFmpeg with ‘--disable-debug’.
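For reference, the space-saving configure invocation amounts to something like the following; ‘--disable-debug’ is the part that matters here, while the ccache hookup and the install prefix are just illustrative:

$ ./configure --disable-debug --cc="ccache gcc" --prefix=/usr/local
$ make && make install

Skipping debugging symbols keeps the object files and installed binaries far smaller, which is what makes the difference on a 2 GB disk.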
You could just have used a swap partition or even a swap file for the 64-bit version (though the 2 GB disk will be really tight then, I guess). Btw, motion_est.c compilation for HP-UX with gcc 2.95 usually will not work with just 128 MB of RAM, and that is not 64-bit ;-)
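For reference, a swap file can be set up with the standard tools; the path and the 256 MB size here are purely illustrative:

$ sudo dd if=/dev/zero of=/swapfile bs=1M count=256   # create a 256 MB file (size is arbitrary)
$ sudo mkswap /swapfile                               # format it as swap space
$ sudo swapon /swapfile                               # enable it for the running system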
It seems that gcc 2.95.3 can’t build mpegvideo_enc.c on x86_32 with only 128 MB of RAM. I’m upping that one to 192 MB. I’m stingy, even though I have 3 GB total in this system, more than I’ve ever had in one computer before.
Ha, one more “64-bit needs much more RAM” claim made less convincing ;-)
And I have 3 GB of RAM in my PC too. It is one of the side effects of doing hardware development: 3 GB can be rather measly for that :-( Everything else hardly fills up 300 MB, so I am now doing a lot of compilation in a RAM disk (tmpfs, more precisely) to at least get some use out of the RAM.
Btw, it is a really weird situation when you have more free RAM than disk space in /home…
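The tmpfs trick mentioned above boils down to something like this; the mount point, the 1 GB size cap, and the ~/ffmpeg source path are all arbitrary:

$ sudo mkdir -p /mnt/ramdisk                             # arbitrary mount point
$ sudo mount -t tmpfs -o size=1G tmpfs /mnt/ramdisk      # RAM-backed filesystem, 1 GB cap
$ cp -a ~/ffmpeg /mnt/ramdisk && cd /mnt/ramdisk/ffmpeg  # build out of RAM from here on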
And the memory requirements keep increasing. I will have to bump the RAM again after the x86_32 VM failed to build the latest SVN of gcc (4.4) with a piddly 192 MB of RAM:
cc1: out of memory allocating 5338824 bytes after a total of 60530688 bytes
Certainly, I could have made provisions for swap space when I established these VMs. But I admit, I find this a fascinating exercise, determining what the actual minimum is for various tasks these days.
Why, I first ran Linux just fine on a machine with only 32 MB of RAM! :-) Yeah, I started sort of late, only in 1998.
Eh, for the record, the x86_64 VM choked in the same place when building gcc-svn. It will need more than 256 MB in order to carry this out. The offending file in both cases is insn-attrtab.c.
Why not run only one VM with a 64-bit kernel, which would allow running both 32-bit and 64-bit applications?
Because I didn’t want to deal with the hassle of building pure 32-bit stuff in a 64-bit environment. Plain and simple.
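For what it’s worth, a 32-bit build on a 64-bit Ubuntu host is doable with multilib, which is roughly the plumbing being referred to; the package name and configure flags below are one plausible combination, not a tested recipe:

$ sudo apt-get install gcc-multilib                                  # 32-bit compiler support (package name assumed)
$ ./configure --arch=x86_32 --extra-cflags=-m32 --extra-ldflags=-m32
$ make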