{"id":517,"date":"2008-02-18T17:44:10","date_gmt":"2008-02-19T01:44:10","guid":{"rendered":"http:\/\/multimedia.cx\/eggs\/ambitious-testing-effort\/"},"modified":"2008-02-18T20:36:19","modified_gmt":"2008-02-19T04:36:19","slug":"ambitious-testing-effort","status":"publish","type":"post","link":"https:\/\/multimedia.cx\/eggs\/ambitious-testing-effort\/","title":{"rendered":"Ambitious Testing Effort"},"content":{"rendered":"<p>When the <a href=\"http:\/\/fate.multimedia.cx\/\">FATE<\/a> initiative went live, I asked the <a href=\"http:\/\/guru.multimedia.cx\/\">guru<\/a> how to handle the H.264 conformance suite samples&#8211; should I just dump all of them into the database whether they were working or not, or should I only enter the samples that worked with the current version of <a href=\"http:\/\/ffmpeg.org\/\">FFmpeg<\/a>? His answer was far more complicated than I could have anticipated:<\/p>\n<ol>\n<li>Enter all currently working samples<\/li>\n<li>If a particular H.264 conformance vector <em>used to work<\/em> with FFmpeg, add the sample and enter a new issue in the tracker<\/li>\n<li>Otherwise, don&#8217;t add the test yet<\/li>\n<\/ol>\n<p>Whoa. As you know, <a href=\"http:\/\/multimedia.cx\/eggs\/51-h264-tests\/\">I got task #1 accomplished<\/a> relatively easily. Now I&#8217;m back to take on task #2.<\/p>\n<p>Hypothesis: most of the code that can make or break the H.264 decoding process lives in files named libavcodec\/h264*. Thus, test the sample suite against every single FFmpeg revision in which one of those files was touched.<\/p>\n<pre>\r\n  for file in libavcodec\/h264*; do svn log $file; done | \r\n  grep \"^r.*lines$\" | \r\n  sed -e 's\/^r\\([0-9]*\\).*$\/\\1\/' | \r\n  sort -n | \r\n  uniq\r\n<\/pre>\n<p>That produces just over 400 different FFmpeg revisions that need testing. 
I had better get started early.<\/p>\n<p>Algorithm outline:<\/p>\n<ul>\n<li>create a script that takes the above revision list and the directory full of H.264 conformance vectors<\/li>\n<li>create a list of standard test names based on the convention already in the database<\/li>\n<li>query the database to obtain a complete list of all tests known to work currently<\/li>\n<li>remove the working tests from the list of all tests<\/li>\n<li>for each revision:\n<ul>\n<li>get the FFmpeg code corresponding to that revision<\/li>\n<li>build FFmpeg, and use <a href=\"http:\/\/ccache.samba.org\/\">ccache<\/a> to hopefully gain a little speedup in the process<\/li>\n<li>test FFmpeg against all of the non-working samples, output results in a CSV format: &#8220;revision, 0, 0, 0, 1, 0, 0, 0, 0, 0,&#8230;&#8221;; this should facilitate analysis and serve to illustrate that the non-working samples have been broken from the get-go<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>Hey, computing cycles are cheap, right? Perhaps the same ambitious strategy can be employed as a one-time brute force method to learn when other FFmpeg components broke so that they can be fixed and subsequently tested via FATE. And there&#8217;s no reason I have to do this on my own; I know certain FFmpeg developers who like to brag about their cumulative 27 or so underworked CPU cores lying around their flats (you devs know who you are).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When the FATE initiative went live, I asked the guru how to handle the H.264 conformance suite samples&#8211; should I just dump all of them into the database whether they were working or not, or should I only enter the samples that worked with the current version of FFmpeg? 
His answer was far more complicated [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[101],"tags":[],"class_list":["post-517","post","type-post","status-publish","format-standard","hentry","category-fate-server"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/multimedia.cx\/eggs\/wp-json\/wp\/v2\/posts\/517","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/multimedia.cx\/eggs\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/multimedia.cx\/eggs\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/multimedia.cx\/eggs\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/multimedia.cx\/eggs\/wp-json\/wp\/v2\/comments?post=517"}],"version-history":[{"count":0,"href":"https:\/\/multimedia.cx\/eggs\/wp-json\/wp\/v2\/posts\/517\/revisions"}],"wp:attachment":[{"href":"https:\/\/multimedia.cx\/eggs\/wp-json\/wp\/v2\/media?parent=517"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/multimedia.cx\/eggs\/wp-json\/wp\/v2\/categories?post=517"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/multimedia.cx\/eggs\/wp-json\/wp\/v2\/tags?post=517"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}