Monthly Archives: February 2008

Git Chat

Actual IM conversation:

[me]: ever use git?
[them]: Why would I do such a thing?
[me]: peer pressure, because all of your co-devs 
      told you it was the cool thing to do

I thought that developing the software that drives FATE would serve as a good opportunity to learn the Git source control software. Foolish.

Git terrifies me. Thing is, I make mistakes. Lots of mistakes. I need a source code management (SCM) system that is sympathetic to my incompetence. As it stands, when I make a mistake, I have to dig through 140 git-* commands on my system to try to guess which one just might offer a shimmering hope of redemption. If I choose poorly, I will only exacerbate the situation as well as pollute the official history log. Such was the case when I tried to revert one particular commit. I can't remember how that worked out exactly. I guess I got the correct code back eventually, but the log tells a sordid tale.

More recently, I edited a file but decided I didn't want the changes; I wanted the previously committed version back. Perhaps git-revert, the way most other SCMs would handle it? Goodness, no. Maybe git-reset? Guess again. It turns out git-checkout is what I was looking for (thanks, Mans). Since then, I have made the mistake of using git-commit in such a way that it committed more files than I intended (serves me right for following examples instead of reading the pedantic documentation first), and now I find myself wanting to undo the commit for one particular file without actually losing the changes.
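
For my own future reference, here is a recipe that appears to do both of those things, pieced together from the documentation. Treat it as a sketch rather than gospel, with some-file.c standing in for whichever file I mangled:

  # throw away uncommitted edits and get the last committed version back
  git checkout -- some-file.c

  # pull one file back out of the most recent commit without losing the edits:
  # reset that file's index entry to the previous commit...
  git reset HEAD^ -- some-file.c
  # ...then rewrite the commit; the changes stay put in the working tree
  git commit --amend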

Here’s a solution that can’t fail: ‘rm -rf .git/’, followed by a re-reading of how to initialize a local Subversion repository. And whose idea was it to tag revisions with random 160-bit hex codes like 488dfe6a946bbbbb4e095a5d758ad9808f7336b1? (Yeah, I know, they’re SHA-1 codes or some such. I don’t care; it’s still not human-friendly). I hope FFmpeg never gets around to making the switch.

Prompt FATE

If you have visited the FATE server, the first thing you will have noticed is how abysmally slow the main page is. That's because of the absurd number of queries required to put together that concise summary page. It's hard to get people to take the system seriously when the front page takes 30-60 seconds to load. Look, what can I tell you? I'm a multimedia hacker, not a DBA.

But it all changed tonight. That's right: it's all about caching! Give the FATE Server another look. Give it many looks, in fact, because the main page will now load immediately. Be advised that the cache is only updated every 15 minutes, so the trade-off is that the results you see may be a few minutes stale.
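
I won't claim there is anything clever going on here. Roughly, the idea is to regenerate the expensive summary on a schedule and serve the pre-built result instead of hammering the database on every page load. Purely for illustration (the script name and paths here are invented), a cron entry along these lines produces that effect:

  # rebuild the cached FATE summary every 15 minutes so the front page
  # can be served without querying the database
  */15 * * * * /home/fate/build_summary.py > /var/www/fate/summary-cache.html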

The Parallelized Elephants In The Room

I think it's time to face up to the fact that this whole parallelization fad is probably not going to go away. There was a recent thread on ffmpeg-devel regarding the possibility of 'porting' FFmpeg to something called the Nvidia Tesla. This discussion rekindled a dormant interest of mine in what optimization possibilities might be in store for the Cell processor on board the Sony PlayStation 3, and whether there should be an effort directed toward making FFmpeg capable of using such features.

  [SPE] [SPE] [SPE]
       [ PPE ]
  [SPE] [SPE] [SPE]

I finally took some time to read through many of the basic and advanced tutorials on offer, and I now have a feel for what the system is set up to do. Unfortunately, it's not always clear what these parallel architectures are capable of, a situation only exacerbated by vague, impenetrable marketing materials. Too many people assume the Cell is a homogeneous multiprocessor like the ones common today, which is simply not the case.

In order to take advantage of the machine's full power, an app has to be written with a special awareness of the fact that the Cell has a primary core (the PPE) and 6 little helper coprocessors (the SPEs), as is half-heartedly illustrated above. The PPE is a dual-threaded, general-purpose 64-bit PowerPC CPU and can do anything. Each SPE, meanwhile, is a small processor in its own right, running its own SIMD instruction set rather than PowerPC, with its own pool of 256 kilobytes of local store (LS) and a memory flow controller (MFC) that coordinates contact with the outside world. To take advantage of the SPEs, the PPE has to load programs into their local store and tell them to execute the code. The Cell also features DMA facilities to efficiently shuttle data between main memory and the SPEs' local stores, and there are mailbox facilities and interrupts to handle communication between the PPE and the SPEs.
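
To put a slightly more concrete face on that arrangement: if I am reading the Cell SDK tutorials correctly, building one of these programs goes roughly like the following (file names invented for illustration). The SPE code is compiled with a separate SPU compiler, wrapped into an object file that exports a program handle, and linked into the PPE program along with libspe2, the library that provides the calls for loading and running code on the SPEs.

  # compile the helper program that will run on an SPE, using the SPU cross-compiler
  spu-gcc -O2 -o spe_task spe_task.c

  # wrap the SPE binary in a PPE-linkable object that exports a program handle
  ppu-embedspu spe_task spe_task spe_task-embed.o

  # build the PPE side and link in the embedded SPE program plus libspe2
  ppu-gcc -O2 -o cell_demo ppe_main.c spe_task-embed.o -lspe2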

I don't know about a general parallelization framework for FFmpeg that could take advantage of multiple architectures like Cell and Tesla (because I still can't figure out how Tesla is supposed to work). However, in a media playback application, it might be possible to assign one SPE the task of decoding perceptual audio. Another SPE might perform inverse transform operations for a video codec, while another does postprocessing and yet another handles YUV -> RGB conversion. On the encoding side, it seems reasonable that SPEs could be put to work on tasks like motion estimation.

Would this qualify as a Google Summer of Code project for FFmpeg? There is precedent for this– see "Development assistant for the 'Ghost' audio codec", which was essentially a lab rat for the newer audio coding ideas of Monty (of Vorbis fame). Fortunately, a prospective student would not require a PS3 for this project, just a Linux machine, because IBM offers a freely downloadable tool called the Cell Simulation Environment. I'm still working on getting the program running (it's distributed as an RPM and is most happy on a Red Hat system).

I am a little surprised that there is not a PS3 Media Center project, in the spirit of the Xbox Media Center, at least not that I have been able to locate via web searches. I have been pondering the technical plausibility of such an endeavor. It almost seems as though the PS3 gives the guest OS just enough of a confined playground that it can't possibly blossom into a reasonably high-end environment. While real-time video playback must be possible, is it possible to run at, say, full 1080p resolution at 30 fps? With all of that processing power, I trust that the Cell can handle any kind of video decoding, though I heard an unsubstantiated rumor once that it takes the PPE and 4 SPEs to decode HD H.264 video from a Blu-Ray disc.

The PS3's native HD player would have a slight advantage since it presumably uses the video hardware's full feature set, which likely lets it pass raw YUV data (12 bits per pixel) straight through to the video hardware in one way or another. In Linux under the hypervisor, you basically get to play with a big RGB frame buffer. That means that not only do you have to convert YUV -> RGB, you also have to shuffle roughly 2.7x as much raw video data to video memory for each frame. That works out to nearly 250 MB of data shuffling each second ((1920 * 1080 pixels/frame) * (4 bytes/pixel) * (30 frames/second)). I have read conflicting sources about whether it's possible for Linux under the PS3 hypervisor to DMA data from main RAM to video RAM. Some sources contend that there is ongoing work in that direction, while others claim that this capability was "fixed" in later firmware revisions (i.e., it is no longer possible).

One possible dealbreaker in the proposal to use the PS3's guest OS mode to install Linux and a general-purpose media player is that, from everything I have read, the hypervisor only allows the guest OS to output stereo audio. This might be a long shot, but perhaps it would be possible to matrix-encode super-stereo (more than 2 channels) audio into a two-channel Dolby Pro Logic II signal to be sent out to a capable decoder. Hey, it's sort of like true surround sound.

If you are interested in the hard technical details of running Linux on a PlayStation 3 and programming its Cell processor, this directory at kernel.org seems to be fairly authoritative on the matter. The latest iteration of the tech documents (dated 2008-02-01) is here.

Belief In The Compiler

If you keep up with FATE as obsessively as I do, you may have noticed that I got Intel's C compiler (icc) into the build farm. It was a struggle, but I finally made it happen. The compiler is distributed as an RPM, but the x86_32 build machine runs Ubuntu. I googled and found a number of blog posts describing how to install it on Ubuntu. I went the route of using the alien program to convert the RPM to a DEB, installing that, manually modifying the 'icc' shell script to point to the correct INSTALLDIR, and updating the ld configuration to point to the right libraries. Finally, I installed the free-for-non-commercial-use license file in one of the many acceptable locations and I was off.
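
For anyone else who needs to go through the same dance, it was approximately the following. I am reconstructing this from memory, so treat it as a sketch; expect the exact paths and version numbers to differ, and note that the ld.so.conf.d file name is my own invention:

  # convert Intel's RPM packages to DEBs and install them
  sudo alien --to-deb intel-icc*.rpm
  sudo dpkg -i intel-icc*.deb
  # (then hand-edit the INSTALLDIR placeholder in the 'icc' wrapper script,
  #  e.g. /opt/intel/cc/10.1.012/bin/icc)

  # tell the dynamic linker where the compiler's shared libraries live
  echo /opt/intel/cc/10.1.012/lib | sudo tee /etc/ld.so.conf.d/intel-icc.conf
  sudo ldconfig

  # finally, drop the non-commercial license file in one of the standard locations
  sudo mkdir -p /opt/intel/licenses
  sudo cp your-noncommercial-license.lic /opt/intel/licenses/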

When I first started fighting with icc about a month ago, the compiler was on version 10.1.008. It is now on 10.1.012, indicating that I may need to update it almost as frequently as the SVN version of gcc used on the farm.

I also tried to get the x86_64 version of the compiler running on the appropriate build machine. When I try to run ‘icc’, I get one of the most annoying and confusing UNIX errors known to exist:

  -bash: /opt/intel/cce/10.1.012/bin/iccbin: No such file or directory

Even though I can see that the file quite clearly exists.
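
If I ever feel like chasing that down, the usual explanation for a 'No such file or directory' complaint about a binary that plainly exists is that the dynamic loader the binary requests is the thing that is actually missing. A couple of commands along these lines should narrow it down:

  # what kind of binary is iccbin, exactly?
  file /opt/intel/cce/10.1.012/bin/iccbin

  # which dynamic loader (ELF interpreter) does it expect? if that path does
  # not exist on the system, bash reports this misleading error
  readelf -l /opt/intel/cce/10.1.012/bin/iccbin | grep interpreter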

But at least icc is running for x86_32, and FFmpeg is compiling fine and running the same series of tests as all the gcc versions. Personally, I have never put a lot of stock in the optimizing prowess of proprietary compilers. I have seen a few too many that need to have their optimizers disabled because they are so obviously buggy. However, icc demonstrates some clear speedups over gcc based on FATE testing. If you open a build record page for an icc run in one window or tab, and then open a build record page for a gcc run in another, you can see that the icc-built binary generally runs faster. This is particularly notable on longer tests.

This exercise also reminds me that the SVN versions of gcc build slow binaries, at least on x86_32. I wonder whether this has to do with the way I am building the compiler, or whether gcc 4.3 really will produce substantially slower binaries.

And yes, one day I plan to deploy an easier way of comparing build performance over time and across the various platforms.