I dug up this old security alert. It’s very dear to me in that I’m directly responsible for the security problem outlined. Whenever I feel like my work doesn’t matter, I just have to remind myself that I have written code that has become widespread enough that it warrants security notices. Many programmers likely go their whole career without making that kind of impact. (That kind of positive spin might be similar to not knowing or caring about the difference between positive and negative attention.)
For the curious, I wrote an AIFF demuxer (among many others) for the xine project. For some reason, I allocated a fixed 100-byte buffer on the stack and then read a number of bytes from the user's input, where that number was itself taken from the same input. Big no-no, and I really don't know what I was thinking; hardcoded, arbitrary constants (char buffer[100]) aren't usually my style. After the hole was found, I audited the rest of my demuxers for similar mistakes and found none. It may seem like this would only be a problem if a user directly loaded a malicious file into xine. However, since AIFF has a MIME type, and since there was a Mozilla plugin version of xine, it would have been possible to deliver a malicious AIFF through a web page.
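The bug pattern looked roughly like this (a minimal C sketch, not the actual xine code; the function name and the read sequence are invented for illustration):

    #include <stdio.h>

    /* hypothetical sketch of the bug pattern; not the actual xine demuxer */
    static void demux_chunk(FILE *f)
    {
        char buffer[100];          /* fixed-size buffer on the stack */
        unsigned int chunk_size;

        /* the chunk size comes straight from the (attacker-controlled) file... */
        if (fread(&chunk_size, sizeof(chunk_size), 1, f) != 1)
            return;

        /* ...and is then used, unvalidated, as the read length: any value
         * over 100 writes past the end of buffer and smashes the stack */
        fread(buffer, 1, chunk_size, f);

        /* the fix is a bounds check before the read, e.g.:
         *   if (chunk_size > sizeof(buffer)) return;  */
    }

The nasty part is that the length lives entirely in attacker-controlled data, so a crafted media file becomes a stack overwrite, which is exactly what made the Mozilla plugin delivery vector so dangerous.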
I was reflecting on this because of a major security problem I recently found in FATE while investigating another issue. It has to do with the data logging script that receives FFmpeg build and test information from FATE clients. I'll let the commit message from my private git repository tell the tale:
Get rid of mind-boggling security hazard that actually prints out the user's actual hash key when passed an invalid hash. This was obviously added for debugging purposes and was only triggered if a user had access to insert data for a particular configuration.
If an attacker knew a valid username and submitted data with a bad HMAC, the system would cheerfully reveal the corresponding hash key. Using this vector, an attacker could have polluted the FATE database with loads of bad data. Not a huge deal in the grand scheme of things, but given that this is the only attack the system tries to guard against, it amounts to a total failure in context.
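The hazard boils down to the failure path of the verification. Here is a hypothetical sketch of the pattern in C with OpenSSL, purely for illustration; the actual FATE logging script is not C code, and verify_submission, hash_key, and the parameter names are all invented:

    #include <stdio.h>
    #include <string.h>
    #include <openssl/hmac.h>
    #include <openssl/crypto.h>

    /* hypothetical sketch of the flawed verification; names are invented */
    static int verify_submission(const char *hash_key,
                                 const unsigned char *data, size_t data_len,
                                 const unsigned char *claimed_mac, size_t mac_len)
    {
        unsigned char mac[EVP_MAX_MD_SIZE];
        unsigned int len;

        HMAC(EVP_sha1(), hash_key, (int)strlen(hash_key),
             data, data_len, mac, &len);

        if (len != mac_len || CRYPTO_memcmp(mac, claimed_mac, len) != 0) {
            /* the hazard: the debugging path echoed the secret back, e.g.
             *   printf("invalid hash; expected key %s\n", hash_key);
             * the fix is to return a generic failure and say nothing more */
            return 0;
        }
        return 1;
    }

The whole point of an HMAC is that the shared secret never travels over the wire; printing the key on a mismatch hands an attacker exactly the thing the check exists to protect.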
Honestly, sometimes I can’t believe people let me anywhere near a programming environment.
One last — and fascinating — note about that AIFF exploit: It was the result of an infamous university course (perhaps this one?) given by D. J. Bernstein in which students were required to find 10 security holes in open source software programs during the term. Reportedly, all of the students failed the class since none actually found 10 holes. I don’t know if the class was ever held again.
(Just some ruminations)
I once saw a TV show where two car thieves (an older guy and a young guy) were each challenged to steal a car from a parking lot within 60 seconds. The young guy went for a shiny-looking fancy car, while the older guy went for a rust bucket.
The old guy got the car out within the time limit; the young guy didn't.
By analogy, when hunting for security flaws in open source software under a time constraint, it would strike me as silly to go after the big, shiny packages; instead, I would poke around the plethora of unknown and undermaintained software hosted on SourceForge and Google Code.