
The Parallelized Elephants In The Room

I think it’s time to face up to the fact that this whole parallelization fad is probably not going to go away. There was a recent thread on ffmpeg-devel regarding the possibility of ‘porting’ FFmpeg to something called the Nvidia Tesla. This discussion rekindled a dormant interest of mine regarding what optimization possibilities might be in store for the Cell processor on board the Sony PlayStation 3, and whether effort should be directed toward making FFmpeg capable of using such features.

[Diagram: the Cell’s central PPE core surrounded by its SPE coprocessors]

I finally took some time to read through many of the basic and advanced tutorials on offer and now have a feel for what the system is set up to do. Unfortunately, it’s not always clear what these parallel architectures are capable of, a situation only exacerbated by vague, impenetrable marketing materials. Too many people confuse the Cell architecture with a homogeneous multiprocessor environment, as is common today, which is simply not the case. In order to take advantage of the machine’s full power, an app has to be written with a special awareness of the fact that the Cell has a primary core (PPE) and 6 little helper coprocessors (SPEs), as is half-heartedly illustrated above. The PPE is a dual-threaded, general-purpose, 64-bit PowerPC CPU and can do anything. Meanwhile, each SPE is its own processor, with a SIMD-oriented instruction set distinct from the PPE’s PowerPC, its own pool of 256 kilobytes of local memory (the local store, or LS), and a memory flow controller (MFC) that coordinates contact with the outside world. To take advantage of the SPEs, the PPE has to load programs into their memory space and tell them to execute the code. The Cell also features DMA facilities to efficiently shuttle data between main memory and the SPEs’ local stores, and there are mailbox facilities and interrupts to facilitate communication between the PPE and the SPEs.
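
To make that division of labor concrete, here is a minimal sketch of what the PPE side of such a program might look like using IBM’s libspe2 library. The embedded SPE program handle (spu_idct_handle) and the work buffer are hypothetical names, and error handling is mostly omitted.

  /* PPE-side sketch: load a hypothetical SPE program into one SPE's local
   * store and run it to completion.  Assumes IBM's libspe2. */
  #include <libspe2.h>

  extern spe_program_handle_t spu_idct_handle;  /* hypothetical embedded SPE program */

  /* Main-memory work buffer; 16 KB matches the largest single DMA transfer. */
  static short coeffs[8192] __attribute__((aligned(128)));

  int main(void)
  {
      spe_context_ptr_t spe;
      unsigned int entry = SPE_DEFAULT_ENTRY;

      spe = spe_context_create(0, NULL);        /* claim one SPE */
      if (!spe)
          return 1;

      spe_program_load(spe, &spu_idct_handle);  /* copy the code into its local store */

      /* Blocks until the SPE program exits; argp hands the SPE the
       * main-memory address of the work buffer so it can DMA the data
       * over by itself. */
      spe_context_run(spe, &entry, 0, coeffs, NULL, NULL);

      spe_context_destroy(spe);
      return 0;
  }

Since spe_context_run() blocks, a real application would typically give each SPE context its own PPE-side thread.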

I don’t know what a general parallelized architecture for FFmpeg would look like, one that could take advantage of disparate architectures like Cell and Tesla (partly because I still can’t figure out how Tesla is supposed to work). However, in a media playback application, it might be possible to assign one SPE the task of decoding perceptual audio. Another SPE might perform inverse transform operations for a video codec, while another does postprocessing and yet another handles YUV -> RGB conversion. On the encoding side, it seems reasonable that SPEs could be put to work on tasks like motion estimation.
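
For balance, here is a rough sketch of what one of those helper tasks might look like from the SPE side, assuming the spu_mfcio.h DMA intrinsics from IBM’s SDK; the buffer size, tag number, and transform step are placeholders.

  /* SPE-side sketch: DMA a block of coefficients from main memory into the
   * 256 KB local store, transform it, and DMA the result back.  The
   * main-memory address arrives in argp (see the PPE sketch above). */
  #include <spu_mfcio.h>

  #define BLOCK_BYTES 16384   /* 16 KB: the largest single DMA transfer */
  #define TAG 3

  /* Local-store buffer; DMA transfers want 128-byte alignment. */
  static short block[BLOCK_BYTES / sizeof(short)] __attribute__((aligned(128)));

  int main(unsigned long long speid, unsigned long long argp,
           unsigned long long envp)
  {
      (void)speid; (void)envp;

      mfc_get(block, argp, BLOCK_BYTES, TAG, 0, 0);  /* pull the data in */
      mfc_write_tag_mask(1 << TAG);
      mfc_read_tag_status_all();                     /* wait for the DMA */

      /* ... run the inverse transform (or other task) over 'block' ... */

      mfc_put(block, argp, BLOCK_BYTES, TAG, 0, 0);  /* push the results out */
      mfc_write_tag_mask(1 << TAG);
      mfc_read_tag_status_all();
      return 0;
  }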

Would this qualify as a Google Summer of Code project for FFmpeg? There is precedent for this; see “Development assistant for the ‘Ghost’ audio codec”, which was essentially a lab rat for the newer audio coding ideas of Monty (of Vorbis fame). Fortunately, a prospective student would not need a PS3 for this project, just a Linux machine, since IBM offers a freely downloadable tool called the Cell Simulation Environment. I’m still working on getting the program running (it’s distributed as an RPM and is happiest on a Red Hat system).

I am a little surprised that there is not a PS3 Media Center project, in the spirit of the Xbox Media Center, at least not one that I have been able to locate via web searches. I have been pondering the technical plausibility of such an endeavor. It almost seems as though the PS3 gives the guest OS just enough of a confined playground that it can’t possibly blossom into a reasonably high-end media platform. While real-time video playback must be possible, is it possible to run at, say, full 1080p resolution at 30 fps? With all of that processing power, I trust that the Cell can handle any kind of video decoding, though I once heard an unsubstantiated rumor that it takes the PPE and 4 SPEs to decode HD H.264 video from a Blu-ray disc. The PS3’s native HD player would have an advantage since it presumably uses the video hardware’s full feature set, which likely allows it to pass raw 12-bit-per-pixel YUV data straight through to the video hardware, in one way or another. In Linux under the hypervisor, you basically get to play with a big RGB frame buffer. That means that not only do you have to convert YUV -> RGB, but you also have to shuffle roughly 2.7x as much raw video data to video memory for each frame (32 bits per pixel of RGB versus 12 bits per pixel of 4:2:0 YUV). That works out to nearly 250 MB of data shuffled each second ((1920 * 1080 pixels/frame) * (4 bytes/pixel) * (30 frames/second)). I have read conflicting reports about whether it’s possible for Linux under the PS3 hypervisor to DMA data from main RAM to video RAM. Some sources contend that work is ongoing, while others claim that this loophole was “fixed” in later firmware revisions (i.e., it is no longer possible).
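
To give a feel for the extra conversion step, here is an unoptimized sketch of converting a planar 4:2:0 YUV frame to 32-bit XRGB using the usual integer BT.601 coefficients; this is generic textbook math, not FFmpeg’s swscale code.

  #include <stdint.h>

  static inline uint8_t clamp8(int v)
  {
      return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
  }

  /* Convert one planar YUV 4:2:0 frame (12 bits/pixel) to packed XRGB
   * (32 bits/pixel) with integer BT.601 coefficients.  Purely illustrative;
   * a real player would use swscale or a hand-optimized routine. */
  void yuv420_to_xrgb(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                      uint32_t *dst, int width, int height)
  {
      for (int j = 0; j < height; j++) {
          for (int i = 0; i < width; i++) {
              int Y = (y[j * width + i] - 16) * 298;
              int U = u[(j / 2) * (width / 2) + i / 2] - 128;
              int V = v[(j / 2) * (width / 2) + i / 2] - 128;

              uint8_t r = clamp8((Y + 409 * V + 128) >> 8);
              uint8_t g = clamp8((Y - 100 * U - 208 * V + 128) >> 8);
              uint8_t b = clamp8((Y + 516 * U + 128) >> 8);

              dst[j * width + i] = ((uint32_t)r << 16) | (g << 8) | b;
          }
      }
  }

At 1080p, that inner loop runs over two million times per frame, 30 times per second, which is exactly the sort of work one would want to hand off to an SPE.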

One possible dealbreaker in the proposal to use the PS3’s guest OS mode to install Linux and a general-purpose media player is that, from everything I have read, the hypervisor only allows the guest OS to output stereo audio. This might be a long shot, but perhaps it would be possible to transcode super-stereo (more than 2 channels) audio to Dolby Pro Logic II on the fly, to be sent out to a capable decoder module. Hey, it’s sort of like true surround sound.
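
As a rough idea of what that would involve, here is a plain ITU-style Lo/Ro matrix fold-down of 5.1 audio to stereo. A real Dolby Pro Logic II encode additionally applies a 90-degree phase shift to the surround channels and uses somewhat different surround gains, so treat this only as an approximation; the function name and channel order are my own assumptions.

  /* Fold interleaved 5.1 float samples (FL, FR, FC, LFE, SL, SR) down to
   * stereo with the common 0.707 (-3 dB) center/surround gains.  Not a true
   * Pro Logic II encoder -- the surround phase shift is omitted. */
  void downmix_51_to_stereo(const float *in, float *out, int nb_frames)
  {
      for (int i = 0; i < nb_frames; i++) {
          const float *s = in + 6 * i;
          float fl = s[0], fr = s[1], fc = s[2], sl = s[4], sr = s[5];

          out[2 * i]     = fl + 0.707f * fc + 0.707f * sl;
          out[2 * i + 1] = fr + 0.707f * fc + 0.707f * sr;
      }
  }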

If you are interested in the hard technical details of running Linux on a PlayStation 3 and programming its Cell processor, this directory at kernel.org seems to be fairly authoritative on the matter. The latest iteration of the tech documents (dated 2008-02-01) is here.

Belief In The Compiler

If you keep up with FATE as obsessively as I do, you may have noticed that I got Intel’s C compiler (icc) into the build farm. It was a struggle, but I finally made it happen. The compiler is distributed as an RPM, but the x86_32 build machine runs Ubuntu. I googled and found a number of blog posts describing how to install it on Ubuntu. I went the route of converting the RPM to a DEB with the alien program, installing it, manually modifying the ‘icc’ shell script to point to the correct INSTALLDIR, and updating the ld configuration to point to the right libraries. Finally, I installed the free-for-non-commercial-use license file in one of the many acceptable locations and I was off.

When I first started fighting with icc about a month ago, the compiler was on version 10.1.008. It is now on 10.1.012, indicating that I may need to update it almost as frequently as the SVN version of gcc used on the farm.

I also tried to get the x86_64 version of the compiler running on the appropriate build machine. When I try to run ‘icc’, I get one of the most annoying and confusing UNIX errors known to exist:

  -bash: /opt/intel/cce/10.1.012/bin/iccbin: No such file or directory

Even though I can plainly see that the file exists.

But at least icc is running for x86_32, and FFmpeg is compiling fine and running the same series of tests as all the gcc versions. Personally, I have never put a lot of stock in the optimizing prowess of proprietary compilers. I have seen a few too many that need to have their optimizers disabled because they are so obviously buggy. However, icc demonstrates some clear speedups over gcc based on FATE testing. If you open a build record page for an icc run in one window or tab, and then open a build record page for a gcc run in another, you can see that the icc-built binary generally runs faster. This is particularly notable on longer tests.

This exercise also reminds me that the SVN versions of gcc build slow binaries, at least on x86_32. I wonder whether this has to do with the way I am building the compiler, or whether gcc 4.3 will actually produce substantially slower binaries.

And yes, I plan one day to deploy an easier way to compare build performance over time and across platforms.

Never Fast Enough

Today’s post over on Coding Horror is called “There Ain’t No Such Thing as the Fastest Code”. The post discusses the idea that no matter how much you manage to hyper-optimize your code, even down to the assembly level, it could always be faster. It’s a strange coincidence that this topic should arise just as there is a discussion over on ffmpeg-devel regarding a faster C-based fast Fourier transform (FFT).


[Image: basic FFT graph]

I guess I had just been taking it for granted that the matter of a fast C-based FFT was a closed issue. How wrong I was. It seems that there is a library answering to the name of djbfft (D. J. Bernstein’s FFT code) that offers some marked speedups over FFmpeg’s current C-based FFT.
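
For reference, the baseline that both FFmpeg’s FFT and djbfft improve upon looks something like the following textbook recursive radix-2 Cooley-Tukey routine. This is deliberately naive (no precomputed twiddle factors, no split radix, no SIMD) and is not code from either project.

  #include <complex.h>
  #include <math.h>

  #ifndef M_PI
  #define M_PI 3.14159265358979323846
  #endif

  /* In-place recursive radix-2 FFT; n must be a power of two. */
  static void fft(double complex *x, int n)
  {
      if (n < 2)
          return;

      /* Reorder into even/odd halves (scratch buffer for simplicity). */
      double complex tmp[n];
      for (int i = 0; i < n / 2; i++) {
          tmp[i]         = x[2 * i];      /* even-indexed samples */
          tmp[i + n / 2] = x[2 * i + 1];  /* odd-indexed samples  */
      }
      for (int i = 0; i < n; i++)
          x[i] = tmp[i];

      fft(x, n / 2);          /* transform the even half */
      fft(x + n / 2, n / 2);  /* transform the odd half  */

      /* Butterflies: X[k] = E[k] + w^k*O[k], X[k+n/2] = E[k] - w^k*O[k]. */
      for (int k = 0; k < n / 2; k++) {
          double complex w = cexp(-2.0 * M_PI * I * k / n);
          double complex e = x[k];
          double complex o = x[k + n / 2];
          x[k]         = e + w * o;
          x[k + n / 2] = e - w * o;
      }
  }

The real-world speed differences come mostly from what this sketch ignores: table-driven twiddle factors, split-radix decompositions, cache-conscious memory access, and hand-tuned inner loops.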

The ensuing discussion for the Coding Horror article is a typical debate on optimization trade-offs (e.g., time investment vs. resulting speed-up). However, while the common argument is that computing hardware is so ridiculously cheap, powerful, and abundant that there is no reason to waste precious time on optimization, there is also the ironic trend back to less capable machines. Like my Asus Eee PC. Trust me, I am suddenly keenly aware of modern software bloat.

My favorite comment comes from no-fun:

…the more people which claim optimisation is worthless, well, that just means that I can charge more since my particular expertise is almost impossible to find. And I’ll take the big $$$, thank you very much. Yeah, I agree with y’all, optimisation is crap. Dont bother learning it.

We multimedia hackers tend to be quite secure in our rationale for optimization. After all, I challenge anyone to decode 1080p H.264 video in real time using pure Java or C# code (no platform-specific ASM allowed).

FATE’s Ugly History

Tonight, I finally implemented and deployed a build history browsing option for the FATE Server. Yeah, FATE just got a little uglier and more crowded in the process. But if you visit the main page and see that one of the build boxes is red instead of its recommended green, you can browse the corresponding history to learn approximately which SVN revision broke the build. I say “approximately” because commits can get stacked: e.g., while a build/test cycle is running, 3 new changes might be committed before the machine has a chance to check out the code and build again. In that case, the history browser will at least provide links to each of the 3 commits in the online SVN browser, which allows a developer to inspect each change that went into a build.

I hope to implement a similar history browser for test results so that a developer can analyze which SVN commit broke a particular test.

With these history features in place, I think I have finally finished the most basic set of useful functionality. Too bad the website looks so bad with its retro-1995 feel. Unfortunately, I have roughly zero experience in making websites pretty. What I care more about is that the website is functional for the nominal FFmpeg developer. I’m disappointed with how crowded the main page is becoming. However, I have had plans from the beginning for how to distill that data down to a few essential pieces of information. Plus, I have ideas for making the front page orders of magnitude faster to load (hello, caching strategy), though I might be the only person who really cares, since I obsessively check the FATE status throughout the day.