Never Fast Enough

Today’s post over on Coding Horror is called There Ain’t No Such Thing as the Fastest Code. It argues that no matter how thoroughly you hyper-optimize your code, even down to the assembly level, it could always be faster. Funny that this topic should come up right now, since there is currently a discussion over on ffmpeg-devel about a faster C-based fast Fourier transform (FFT).

[Figure: basic FFT graph]
I guess I had just been taking it for granted that the matter of a fast C-based FFT was a closed issue. How wrong I was. It seems there is a library answering to the name of djbfft that offers some marked speedups over FFmpeg’s current C-based FFT.
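For anyone who hasn’t stared at one of these lately, here is a minimal sketch of what a plain C FFT looks like: a textbook recursive radix-2 Cooley-Tukey transform. To be clear, this is purely illustrative and is not FFmpeg’s implementation or djbfft; both of those use far more aggressive split-radix and cache-aware tricks, which is exactly where the speedups being discussed come from.

```c
#include <complex.h>
#include <math.h>
#include <stdio.h>

/* Textbook recursive radix-2 Cooley-Tukey FFT; n must be a power of 2.
 * Illustrative sketch only -- not FFmpeg's FFT, not djbfft. */
static void fft(double complex *x, int n)
{
    if (n <= 1)
        return;

    const double pi = acos(-1.0);
    double complex even[n / 2], odd[n / 2];

    /* split into even- and odd-indexed samples */
    for (int i = 0; i < n / 2; i++) {
        even[i] = x[2 * i];
        odd[i]  = x[2 * i + 1];
    }

    fft(even, n / 2);
    fft(odd,  n / 2);

    /* combine the two half-size transforms with twiddle factors */
    for (int k = 0; k < n / 2; k++) {
        double complex t = cexp(-2.0 * I * pi * k / n) * odd[k];
        x[k]         = even[k] + t;
        x[k + n / 2] = even[k] - t;
    }
}

int main(void)
{
    /* quick sanity check: an impulse transforms to all ones */
    double complex x[8] = { 1, 0, 0, 0, 0, 0, 0, 0 };
    fft(x, 8);
    for (int i = 0; i < 8; i++)
        printf("%+.2f%+.2fi\n", creal(x[i]), cimag(x[i]));
    return 0;
}
```

The naive version does the job, but it burns time on recursion overhead, temporary buffers, and redundant twiddle-factor computation, which is precisely the kind of thing a tuned implementation eliminates.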

The comment thread on the Coding Horror article is a typical debate over optimization trade-offs (e.g., time invested vs. resulting speedup). The common argument is that computing hardware is so ridiculously cheap, powerful, and abundant that there is no reason to waste precious time on optimization. Yet there is also an ironic trend back toward less capable machines, like my Asus Eee PC. Trust me, I am suddenly keenly aware of modern software bloat.

My favorite comment comes from no-fun:

…the more people which claim optimisation is worthless, well, that just means that I can charge more since my particular expertise is almost impossible to find. And I’ll take the big $$$, thank you very much. Yeah, I agree with y’all, optimisation is crap. Dont bother learning it.

We multimedia hackers tend to be quite secure in our rationale for optimization. After all, I challenge anyone to decode 1080p H.264 video in realtime using pure Java or C# code (no platform-specific ASM allowed).