Wikipedia’s knowledge of compilers credits the first compiler to Grace Hopper in 1952 (for a language called A-0). I suspect that if blogs existed in 1952, we would have been treated to rants such as:
I, personally, have a problem with a developer who feels entitled to be able to develop for a computer without investing the time to learn the machine opcodes or punch card formats that the machine was built around.
This is one of the arguments I have been hearing this past week after my employer announced an upcoming method for exporting Flash/AS3 projects as iPhone apps. It strikes me as just another round of the age-old argument between low- and high-level languages, that's all. I chortle when recalling how certain people urged me to construct the FATE system in POSIX-compliant, ANSI C for maximum portability and speed instead of using a language like Python. Nowadays, that Python code is testing FFmpeg on a dozen different CPUs running 10 different operating systems. It makes me shudder to think of how much work it would have been to write the FATE script in straight C and how little benefit doing so would have brought.
In other groundbreaking iPhone news, Mans recently announced that FFmpeg can be built for the iPhone straight out of the SVN tree with only a minor modification to Apple's iPhone toolchain (call the SDK police!). This is a feat that has thus far proved challenging, as Mans outlined here. I understand it will be a little difficult to continuously test FFmpeg on either a real iPhone or an emulator. However, I'm planning a revision to FATE's architecture so that certain configurations can be marked "build-only" and forgo the test phase. This will also be useful for Hitachi SH-4 and perhaps other architectures that FFmpeg supports but for which we don't have hardware available for continuous testing.
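To make the idea concrete, here is a rough Python sketch of what I have in mind; the configuration names and helper functions are made up for illustration and are not FATE's actual code or configuration format:

```python
# Rough sketch only, not FATE's actual code or configuration format:
# each configuration carries a build_only flag; the driver always runs
# the build phase but skips the test phase when the flag is set.

CONFIGURATIONS = [
    {"name": "x86_64-linux-gcc",     "build_only": False},
    {"name": "arm-apple-darwin-gcc", "build_only": True},   # iPhone: builds, but no way to run the tests yet
    {"name": "sh4-linux-gcc",        "build_only": True},   # no SH-4 hardware available for continuous testing
]

def build_ffmpeg(name):
    # placeholder for the real configure/make steps
    print("building configuration: " + name)

def run_test_suite(name):
    # placeholder for running the actual FATE tests
    print("running test suite for: " + name)

for config in CONFIGURATIONS:
    build_ffmpeg(config["name"])
    if config["build_only"]:
        print("build-only configuration, skipping test phase: " + config["name"])
        continue
    run_test_suite(config["name"])
```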
Whenever the notion of compiling and running FFmpeg on the iPhone crops up, it prompts me to wonder why. Why do people care about this? Are they transcoding media on the iPhone? Are they republishing old games and using FFmpeg’s numerous game-oriented decoders for direct playback instead of doing the sensible thing and transcoding the original media to MP4/CAF/H.264/AAC for native playback through the platform’s frameworks and hardware acceleration? Is it just a point of academic curiosity thanks to the fact that FFmpeg is quickly becoming a standardized metric of compiler quality? Why?
Is all the difficulty of running FATE tests on the iPhone the lack of a decent remote file system? Would replacing the ffmpeg binary with a script that downloads the input sample via FTP from a local server, runs the test, and then deletes the file solve it?
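A rough sketch of such a wrapper, in Python since FATE is already Python; the server address, the location of the real binary, and the "-i" convention are just guesses on my part, not how FATE actually invokes ffmpeg:

```python
#!/usr/bin/env python
# Hypothetical stand-in for the ffmpeg binary on the device: fetch the input
# sample over FTP from a local server, run the real binary, then delete the
# sample to free the space again.

import os
import sys
import subprocess
from ftplib import FTP

FTP_SERVER = "192.168.0.10"              # made-up address of the local server
REAL_FFMPEG = "/var/fate/ffmpeg.real"    # made-up location of the real binary

args = sys.argv[1:]

# Find the input sample, assuming it follows an "-i" option on the command line.
sample = None
if "-i" in args:
    sample = args[args.index("-i") + 1]

if sample and not os.path.exists(sample):
    sample_dir = os.path.dirname(sample)
    if sample_dir and not os.path.isdir(sample_dir):
        os.makedirs(sample_dir)
    ftp = FTP(FTP_SERVER)
    ftp.login()                          # anonymous login to the local server
    with open(sample, "wb") as f:
        ftp.retrbinary("RETR " + sample, f.write)
    ftp.quit()

ret = subprocess.call([REAL_FFMPEG] + args)

if sample and os.path.exists(sample):
    os.remove(sample)                    # clean up the downloaded sample

sys.exit(ret)
```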
David Conrad’s email response to me on the matter seems to indicate that the net fs issue is blocking the possibility.
Your suggestion got me thinking about how the FATE test suite isn't that large in the first place (200 MB? I haven't checked recently), at least not in the grand scheme of how much storage an iPhone has to offer. I wonder if it would be possible to download all of the samples to the iPhone in advance in such a way that the test app could access them. Alternatively, package all the samples into the downloaded binary (it'll be a bit big, but if it works, it works).
269 MB for fate-suite and another 352 MB for tests/data; still much smaller than the MP3 collection on a typical iPhone ;)
"make test", on the other hand, will be somewhat trickier to run with no NFS (but it looks like David already found a way).
Well, FATE is not exactly comparable to iPhone apps.
When you are building a system like FATE that will only run on rather powerful hardware (anything that can compile motion_est.c counts as such, IMHO), that targets very varied platforms you do not know in advance, and where performance is certain to be a non-issue, that is a very different situation from developing an iPhone app…
And I do think that developing for such a platform without really knowing and understanding it is going to be a lot of pain in the long term. Of course a lot of people get along well without knowing assembler, but there is no question that as soon as a compiler bug crops up, or the compiler generates very bad and slow code, or your algorithm turns out to be unsuitable for the hardware architecture, you will be lost and/or waste a lot of time in a situation where one look at the asm would have made the problem obvious. And such a situation will almost certainly come up "somewhen".