Category Archives: FATE Server

Lots of FATE Compiler Maintenance

I was working like a madman on FATE’s compilers tonight. Let’s see:

  • Added gcc 4.3.4 to all of my Linux configurations (x86_32, x86_64, and PPC), decommissioning 4.3.3
  • Added gcc 4.4.1 to Linux/32- and 64-bit x86, decommissioning 4.4.0 (had already done this for PPC some time ago)
  • Upgraded the gcc SVN snapshots for all 3 Linux machines
  • Upgraded the LLVM SVN snapshots for 32- and 64-bit x86 Linux; this does not solve the build problem for 64-bit (remind me again how LLVM will save us from the tyranny of gcc). Update: solved, thanks to the help in the comments
  • Temporarily solved the Intel C Compiler conflict with ccache by disabling ccache when building with icc, thereby crippling build capacity but keeping the builds honest

This all feels like it was way more work than it should have been. Opportunities for improvement abound, starting with my plan for auto-recompiling the gcc and LLVM snapshots and automatically placing them into service, as outlined in a previous post. It should only take an evening if I can get down to it.

I know I have outstanding requests to add LLVM (32- and 64-bit) for Intel Mac. That sounds reasonable, and it would be great to hook up the auto-update script to that LLVM process as well.

Ramping Up On JavaScript

I didn’t think I would ever have sufficient motivation to learn JavaScript, but here I am. I worked a little more on that new FATE index page based on Google’s Visualization API. To that end, I constructed the following plan:

Part 1: Create A JSON Data Source
Create a JSON data source, now that I have figured out how to do it correctly. JSON data really is just a JavaScript data structure. It can be crazy to look at since it necessitates packing dictionaries inside of arrays inside of dictionaries inside of arrays. (Check the examples; observe that the data structure ends with “}]}]}});”.) But in the end, the Google visualization knows what to do with it.
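For reference, here is a minimal sketch of what that nesting looks like on the wire, with the column and row contents invented for illustration:

    // a hedged sketch of the response literal (made-up FATE-ish data);
    // note the arrays-in-dictionaries-in-arrays nesting
    google.visualization.Query.setResponse({
      version: '0.6',
      reqId: '0',
      status: 'ok',
      table: {
        // one dictionary per column...
        cols: [
          {id: 'config', label: 'Configuration', type: 'string'},
          {id: 'failed', label: 'Failed Tests', type: 'number'}],
        // ...and one dictionary per row, packing an array of cell dictionaries
        rows: [
          {c: [{v: 'x86_64 gcc 4.4.1'}, {v: 0}]},
          {c: [{v: 'x86_32 icc'}, {v: 3}]}]}});

Crunch the closing delimiters together and there is the promised “}]}]}});”.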

Done.

Part 2: Connect the JSON Data Source
Hook the JSON data source up to the newest revision of the FATE front page, rolled out a little while ago.

Done.

Part 3: Save The User’s Most Recent Sort Criteria
The problem is that the page resets the sort criteria on a refresh. There needs to be a way to refresh the page while maintaining those criteria. This leads me to think that I should have some “refresh” button embedded in the page which asks the server for updated data using a facility I have heard of called XMLHttpRequest. I found a simple tutorial on the matter but was put off by the passage “Because of variations among the Web browsers, creating this object is more complicated than it need be.”
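For the record, the complication the tutorial alludes to only amounts to a few lines. Here is a rough sketch of what a refresh might look like; the data URL and the redraw function are placeholders of my own invention:

    // create the request object across browser variations
    function createXHR() {
      if (window.XMLHttpRequest)
        return new XMLHttpRequest();                 // most browsers
      return new ActiveXObject('Microsoft.XMLHTTP'); // older Internet Explorer
    }

    // ask the server for fresh data without reloading the page
    function refreshData() {
      var xhr = createXHR();
      xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200)
          updateTable(xhr.responseText);  // hypothetical table redraw
      };
      xhr.open('GET', 'fate-data.php', true);  // hypothetical data script
      xhr.send(null);
    }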

Backup idea: Cookies. Using this tutorial as a guide, set a cookie whenever the user changes either the sort column or the sort order.
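Something along these lines, presumably hung off the table’s sort event via google.visualization.events.addListener (the names here are mine):

    // remember the user's sort settings for a year
    function saveSortCriteria(column, ascending) {
      var expires = new Date();
      expires.setFullYear(expires.getFullYear() + 1);
      document.cookie = 'fate_sort=' + column + ',' + ascending +
                        '; expires=' + expires.toUTCString();
    }

    // restore them on page load; returns null if no cookie is set
    function loadSortCriteria() {
      var match = document.cookie.match(/fate_sort=([^;]+)/);
      if (!match)
        return null;
      var parts = match[1].split(',');
      return {column: parseInt(parts[0], 10), ascending: parts[1] == 'true'};
    }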

Done, though I may want to revisit the XHR idea one day.

Part 4: Make It Look Good
Finally, figure out how the div tag works to make the layout a little cleaner.

Done. Sort of. There are 2 div tags on the page now, one for the header and one for the table. I suppose I will soon have to learn CSS to really drag this page out of 1997.

Bonus: Caching the JSON Data
Ideally, the web browser makes the JSON data request using the If-Modified-Since HTTP header. Use a sniffer to verify this. If so, add a table to the FATE MySQL database containing a single column that specifies the timestamp when the web config cache table was last updated. If this time is earlier than the time in the request header, respond with a 304 (not modified) HTTP code.

Not done. It seems that these requests don’t set the appropriate HTTP header, at least not in Firefox.
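If I ever marry this to the XHR idea from Part 3, one possible workaround would be to set the header on the request by hand. A speculative sketch, reusing the hypothetical createXHR() and updateTable() from above:

    var lastFetch = null;  // Last-Modified value from the previous response

    function fetchIfModified() {
      var xhr = createXHR();
      xhr.open('GET', 'fate-data.php', true);
      if (lastFetch)
        xhr.setRequestHeader('If-Modified-Since', lastFetch);
      xhr.onreadystatechange = function () {
        if (xhr.readyState != 4)
          return;
        if (xhr.status == 200) {  // fresh data arrived; remember its timestamp
          lastFetch = xhr.getResponseHeader('Last-Modified');
          updateTable(xhr.responseText);
        }
        // a 304 means the copy we have is still current; nothing to do
      };
      xhr.send(null);
    }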

I hope to put this page into service soon, just as soon as I can dump the rest of the existing index.php script into this new one. As usual, not elegant, but it works.

Google Visualizing FATE

I guess that Cloud Computing stuff doesn’t only apply to data storage. There are also things like Google’s Visualization API for manipulating and presenting data. In this paradigm, the data is under my control but the code to manipulate it lives on Google’s servers.

Good or bad? That’s up for debate, but the table visualization definitely caught my eye. Look at the experimental results when I put FATE data into the table. Notice how easy it is to sort by columns (the default sort is such that the failed builds float to the top). I may be a little too close to the situation, but I think it’s a little better than my last attempt. Again, no more up-to-15-minute delay with this system; new build data is available for presentation as soon as it is submitted to the database.
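The setup on the page is pleasantly small. Roughly this (the div id and variable names are mine; the JSON literal is whatever the PHP script emits):

    // load the table package via the Google AJAX APIs loader
    google.load('visualization', '1', {packages: ['table']});
    google.setOnLoadCallback(drawTable);

    function drawTable() {
      var data = new google.visualization.DataTable(jsonFromServer);
      var table = new google.visualization.Table(
          document.getElementById('fate_table'));
      // sort by the failure count, descending, so failed builds float up
      table.draw(data, {sortColumn: 1, sortAscending: false});
    }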

Let me know what you think. Personally, I think we may have a winner here. Maybe Google’s other visualizations (assorted graphs and such) could be just the thing we have been searching for in order to plot trends like performance and code size.

I just wish I could understand the data source wire protocol. As it stands, the index-v3.php script generates JavaScript on the fly to populate the table. It would be a bit more elegant if the data were provided by a separate script. But, hey, this works.
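If I ever make that split, I imagine the page side would use the API’s Query class against a standalone data source script, something like this (the URL is invented):

    function loadFateData() {
      // point the query at a separate server-side data source script
      var query = new google.visualization.Query('fate-datasource.php');
      query.send(function (response) {
        if (response.isError()) {
          alert('Error fetching FATE data: ' + response.getMessage());
          return;
        }
        var table = new google.visualization.Table(
            document.getElementById('fate_table'));
        table.draw(response.getDataTable(),
                   {sortColumn: 1, sortAscending: false});
      });
    }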

Left ARM vs. Right ARM

Måns went and got one of those SheevaPlugs, the wall-wart form factor device that is a self-contained ARM-based computer. Of course it’s already in service doing FATE duty. This is an ARMv5TE CPU, in contrast to the ARMv7 series on the Beagle Board. This is why there are 2 blocks of ARM results on the FATE page.

In other FATE news, I activated 10 new tests tonight: v210, for the V210 10-bit YUV format; and 9 more fidelity range extension H.264 conformance vectors.