Tag Archives: Python

Adventures in Unicode

Tangential to multimedia hacking is proper metadata handling. Recently, I have developed an interest in processing a large corpus of multimedia files which are likely to contain metadata strings that do not fall into the lower ASCII set. This is significant because the lower ASCII set intersects perfectly with my own programming comfort zone. Indeed, all of my programming life, I have insisted on covering my ears and loudly asserting “LA LA LA LA LA! ALL TEXT EVERYWHERE IS ASCII!” I suspect I’m not alone in this.

Thus, I took this as an opportunity to conquer my longstanding fear of Unicode. I developed a self-learning course composed of a series of exercises which add up to this diagram:



Part 1: Understanding Text Encoding
Python 2 has regular (byte) strings by default and then it has Unicode strings. The latter are prefixed by the letter ‘u’. This is what ‘ö’ looks like encoded in each type.

>>> 'ö', u'ö'
('\xc3\xb6', u'\xf6')

A large part of my frustration with Unicode comes from Python yelling at me about UnicodeDecodeErrors and an inability to handle the number 0xc3 for some reason. This usually comes up when I’m trying to wrap my head around an unrelated problem and don’t care to get sidetracked by text encoding issues. However, when I studied the above output, I finally understood where the 0xc3 comes from. I just didn’t understand what the encoding represents exactly.
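
A minimal Python 2 sketch (assuming a UTF-8 terminal) makes the relationship concrete: '\xc3\xb6' is just the UTF-8 encoding of the single code point U+00F6, and mixing the two string types is where the dreaded UnicodeDecodeError comes from.

# -*- coding: utf-8 -*-
# Python 2 sketch, assuming a UTF-8 terminal.

byte_string = '\xc3\xb6'     # 'ö' as the terminal delivers it: 2 UTF-8 bytes
unicode_string = u'\xf6'     # 'ö' as a single code point, U+00F6

# Decoding the UTF-8 bytes yields the Unicode string, and vice versa.
assert byte_string.decode('utf-8') == unicode_string
assert unicode_string.encode('utf-8') == byte_string

# Mixing the two forces an implicit ASCII decode of the byte string,
# which is where the "can't decode byte 0xc3" complaints come from:
#   unicode_string + byte_string
#   UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0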

Adjusting The Timetable and SQL Shame

My Game Music Appreciation website has a big problem that many visitors quickly notice and comment upon. The problem looks like this:



The problem is that all of these songs are 2m30s in length. During the initial import process, unless a chiptune file already had curated length metadata attached, my metadata utility emitted a default play length of 150 seconds. This is not good if you want to listen to all the songs in a soundtrack without interacting with the player page but the soundtrack has various short songs (think “game over” or other quick jingles) that are over in a few seconds. Such songs still pad out the full 150 seconds with silence.

So I needed to correct this. Possible solutions:

  1. Manually: At first, I figured I could ask the database which songs needed fixing and listen to them to determine the proper lengths. Then I realized that there were well over 1400 games affected by this problem. This just screams “automated solution”.
  2. Automatically: Ask the database which songs need fixing and then somehow ask the computer to listen to the songs and decide their proper lengths. This sounds like a winner, provided that I can figure out how to programmatically determine if a song has “finished”.

SQL Shame
This play adjustment task has been on my plate for a long time. A key factor that has blocked me is that I couldn’t figure out a single SQL query to feed to the SQLite database underlying the site which would give me all the songs I needed. To be clear, it was very simple and obvious to me how to write a program that would query the database in phases to get all the information. However, I felt that it would be impure to proceed with the task unless I could figure out one giant query to get all the information.

This always seems to come up whenever I start interacting with a database in any serious way. I call it SQL shame. This task got some traction when I got over this nagging doubt and told myself that there’s nothing wrong with the multi-step query program if it solves the problem at hand.

Suddenly, I had a flash of inspiration about why the so-called NoSQL movement exists. Maybe there are a lot more people who don’t like trying to derive such long queries and are happy to allow other languages to pick up the slack.

Estimating Lengths
Anyway, my solution involved writing a Python script to iterate through all the games whose metadata was output by a certain engine (the one that makes the default play length 150 seconds). For each of those games, the script queries the song table and determines whether each song is exactly 150 seconds. If it is, the script goes to work trying to estimate the true length.

The foregoing paragraph describes what I figured was possible with only a single (possibly large) SQL query.
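
In rough terms, the phased approach looks something like the sketch below. The table and column names (games.engine, songs.game_id, songs.length_ms) and the database filename are placeholders, not the site’s actual schema.

import sqlite3

# Multi-step version: one query for the games, then one query per game.
# All names here are stand-ins for the real schema.
conn = sqlite3.connect('gamemusic.sqlite3')
game_rows = conn.execute(
    "SELECT id FROM games WHERE engine = ?", ('default-length-engine',)
).fetchall()

for (game_id,) in game_rows:
    song_rows = conn.execute(
        "SELECT id, length_ms FROM songs WHERE game_id = ?", (game_id,)
    ).fetchall()
    for song_id, length_ms in song_rows:
        if length_ms == 150000:
            print("game %s, song %s needs a real length" % (game_id, song_id))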

I ran each song represented in the chiptune file through a custom length estimator program. My brilliant (err, naïve) solution to the length estimation problem was to synthesize seconds of audio up to a maximum of 120 seconds (tightening up the default length just a bit) and count how many of those seconds had all 0 samples. If the count reached 5 consecutive seconds of silence, then the estimator rewound the running length by 5 seconds and declared that to be the proper length. Then it updated the database.
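
In outline, the estimator logic amounts to something like this; the render_second() callable is a stand-in for however the underlying chiptune engine synthesizes one second of PCM samples.

MAX_SECONDS = 120      # tightened-up ceiling instead of the default 150
SILENCE_LIMIT = 5      # consecutive all-zero seconds before giving up

def estimate_length(render_second):
    # render_second() is a stand-in that returns one second's worth of
    # PCM samples for the song being measured.
    silent_run = 0
    for second in range(1, MAX_SECONDS + 1):
        samples = render_second()
        if all(sample == 0 for sample in samples):
            silent_run += 1
            if silent_run == SILENCE_LIMIT:
                # rewind past the silence and call that the real length
                return second - SILENCE_LIMIT
        else:
            silent_run = 0
    return MAX_SECONDS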

There were about 1430 chiptune files whose songs needed updates. Some files had a single song; some had over 100. When I let the script run, it took nearly 65 minutes to process all the files. That was a single-threaded solution, of course. Even though I already had the data I needed, I wanted to try my hand at parallelizing the script. So I went to work with Python’s multiprocessing module and quickly refactored it to use all 4 CPU threads on the machine where the files live. Results:

  • Single-threaded solution: 64m42s to process corpus (22 games/minute)
  • Multi-threaded solution: 18m48s with 4 CPU threads (75 games/minute)

More than a 3x speedup across 4 CPU threads, which is decent for a primarily CPU-bound operation.
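
For the curious, the multiprocessing refactor boiled down to something like the following sketch; fix_game() and the game ID query are placeholders for the real per-game work and the real database lookup.

from multiprocessing import Pool

def fix_game(game_id):
    # placeholder: open a per-worker SQLite connection, estimate lengths
    # for each 150-second song in this game, and update the database
    return game_id

def load_affected_game_ids():
    # placeholder for the real query; roughly 1430 games were affected
    return range(1430)

if __name__ == '__main__':
    pool = Pool(processes=4)              # one worker per CPU thread
    for finished in pool.imap_unordered(fix_game, load_affected_game_ids()):
        print("finished game %s" % finished)
    pool.close()
    pool.join()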

Epilogue
I suspect that this task will require some refinement or manual intervention. Maybe there are songs which actually have more than 5 legitimate seconds of silence. Also, I entertained the possibility that some songs would generate very low amplitude noise rather than being perfectly silent. In that case, I could refine the script to stipulate that amplitudes below a certain threshold count as 0. Fortunately, I marked which games were modified by this method, so I can run a new script as necessary.

SQL Schema
Here is the task, for those who want to try their hand at a proper query against my SQLite3 database: retrieve all of the rows from the games table where all of the corresponding songs in the songs table have a length of 150000 milliseconds. I am confident that a single query can do it; I just didn’t have the patience to work it out.
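
For what it’s worth, a single-query formulation probably looks something like this; the column names are guesses rather than the real schema.

import sqlite3

# Select the games for which every associated song reports exactly
# 150000 ms; column names (songs.game_id, songs.length_ms) are guesses.
QUERY = """
SELECT g.*
  FROM games g
 WHERE EXISTS (SELECT 1 FROM songs s WHERE s.game_id = g.id)
   AND NOT EXISTS (SELECT 1 FROM songs s
                    WHERE s.game_id = g.id
                      AND s.length_ms != 150000)
"""

conn = sqlite3.connect('gamemusic.sqlite3')
for row in conn.execute(QUERY):
    print(row)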

Releasing GME Players and Tools

I just can’t stop living in the past. To that end, I’ve been playing around with the Game Music Emu (GME) library again. This is a software library that plays an impressive variety of special music files extracted from old video games.

I have just posted a series of GME tools and associated utilities up on GitHub.

Clone the repo and try them out. The repo includes a small test corpus since one of the most tedious parts about playing these files tends to be tracking them down in the first place.

Players
At first, I tried writing some simple command line audio output programs based on GME. GME has to be the simplest software library that it has ever been my pleasure to code against. All it took was a quick read through the gme.h header file and it was immediately obvious how to write a simple program.
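
For a sense of how little is involved, here is a rough ctypes sketch of the basic call sequence gme.h exposes (open a file, start a track, pull PCM samples); the shared library name and the input filename are assumptions.

import ctypes

gme = ctypes.CDLL('libgme.so.0')                   # library path is an assumption
for name in ('gme_open_file', 'gme_start_track', 'gme_play'):
    getattr(gme, name).restype = ctypes.c_char_p   # gme_err_t is a C string

SAMPLE_RATE = 44100
emu = ctypes.c_void_p()
err = gme.gme_open_file(b'song.nsf', ctypes.byref(emu), SAMPLE_RATE)
assert err is None, err
gme.gme_start_track(emu, 0)

buf = (ctypes.c_short * 4096)()                    # interleaved stereo samples
gme.gme_play(emu, len(buf), buf)                   # render one buffer of audio
# ...hand buf to PulseAudio/ALSA/SDL here...
gme.gme_delete(emu)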

First, I wrote a command line tool that output audio through PulseAudio on Linux. Then I made a second program that used ALSA. Guess what I learned through this exercise? PulseAudio is actually far easier to program than ALSA.

I also created an SDL player, seen in my last post regarding how to write an oscilloscope. I think I have the A/V sync correct now. It’s a little more fun to use than the command line tools. It also works on non-Linux platforms (tested at least on Mac OS X).

Utilities
I also wrote some utilities. I’m interested in exporting metadata from these rather opaque game music files in order to make them a bit more accessible. To that end, I wrote gme2json, a program that uses the GME library to fetch data from a game music file and then print it out in JSON format. This makes it trivial to extract the data from a large corpus of game music files and work with it in many higher level languages.
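
A consumption sketch looks something like this; the key name queried below is a guess, not gme2json’s actual output schema.

import json
import subprocess

# run gme2json on one file and load the result; 'track_count' is a
# guessed key name rather than gme2json's documented output
output = subprocess.check_output(['./gme2json', 'song.nsf'])
info = json.loads(output)
print('tracks: %s' % info.get('track_count'))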

Finally, I wrote a few utilities that repack certain ad-hoc community-supported game music archives into… well, an ad-hoc game music archive of my own devising. Perhaps it’s a bit of NIH syndrome, but I don’t think some of these ad-hoc community formats were very well thought-out, or perhaps they made sense a decade or more ago. I guess I’m trying to bring a bit of innovation to this archival process.

Endgame
I haven’t given up on that SaltyGME idea (playing these game music files directly in a Google Chrome web browser). All of this ancillary work is leading up to that goal.

Silly? Perhaps. But I still think it would be really neat to be able to easily browse and play these songs, and make them accessible to a broader audience.

Weighted Moving Averages

Everyone would like to see FFmpeg performance graphs based on data collected from FATE. One day, I got down to analyzing the millisecond runtime data for one configuration regarding one of the longer tests in the suite. The numbers were all over the map and directly graphing them would prove confusing and meaningless.

Then I had an idea which draws on my limited experience with digital signal processing as it pertains to multimedia. What about plotting points that represent the average of point n along with, say, points (n-15)..(n+15)? Then I refined the idea a bit– how about multiplying each point by a factor such that point n gets the most consideration while points further away from n receive progressively less consideration?

And then I quickly covered up all evidence of the brainstorm for fear of catching a beatdown from a tough gang of statisticians for devising the most idiotic idea in the history of statistics. Imagine my surprise when I was recently reading up on a completely different topic and found the exact same techniques described in practical application. It turns out they even have formal names: moving averages, of which the various forms of weighted moving averages comprise a sub-category.

This is not the first time I have re-invented something that I would later learn is called a “wheel” (the first time I can recall was when I independently figured out a bubble sort algorithm). I’m sure a spreadsheet program can be coerced to massage the data in this manner, but I decided to go the Python route; I have to use Python to effectively extract the data from MySQL into a usable form anyway.

I created a Python script that takes a list of FATE configuration IDs and a test spec number. For each configuration ID, gather the history of millisecond run times for the specified test. Run a weighted moving average using the current run time and the 9 run times prior. The current run time is multiplied by 10, (n-1) is multiplied by 9, down through (n-9), which is multiplied by 1. Divide the weighted sum by 55 (10+9+…+1). For a more elegant mathematical explanation, see the Wikipedia entry. This script allows me to dump all the performance data for a series of configurations, for example all of the PowerPC configurations that run on the same machine.
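
In Python, the averaging step amounts to something like this (a sketch of the arithmetic just described, not the actual FATE script):

def weighted_moving_average(times, window=10):
    # Current point weighted `window`, the previous point `window - 1`,
    # down to 1; divide by the weight sum (10 + 9 + ... + 1 = 55).
    weights = range(1, window + 1)           # oldest ... newest
    denom = float(sum(weights))              # 55 for window=10
    averaged = []
    for n in range(window - 1, len(times)):
        span = times[n - window + 1:n + 1]   # the 10 most recent samples
        averaged.append(sum(w * t for w, t in zip(weights, span)) / denom)
    return averaged

# e.g. weighted_moving_average([512, 498, 505, 730, 501]) on millisecond run times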

Then, we turn the data over to OpenOffice Calc for graphing… <sigh…>


id RoQ video encode test, run on many successive revisions on PowerPC

Dear OpenOffice.org: I hate you so much. Sometimes you elect to plot data on a graph in a sane range and sometimes you opt to begin from 0. The latter is shown above, where the former would have been much more appropriate. As it stands, any useful data to be gathered from visualizing the trend of the weighted moving averages is lost. And don’t even get me started (again) on your fragility. Or your atrocious software update mechanism.

Anyway, the above graph shows performance over time for the idroq-video-encode test run on various PowerPC configurations. The graph actually does reveal at least one oddity– the orange pulse wave that represents gcc 4.1.2’s performance over time. That might be worth looking into, particularly since it’s on the high part of the wave right now.

Below is the same thing, only with data collected from running h264-conformance-mv1_brcm_d, which is the most computationally intensive H.264 conformance sample in the FATE suite. And it looks like I mislabeled ‘gcc 4.3.3’ as ‘gcc 4.3.2’ in both graphs. Oh well; I’m not going through the pain of regenerating those graphs with OpenOffice now.


long H.264 conformance test, run on many successive revisions on PowerPC

What to do about performance graphs going forward? I really don’t think it would be worthwhile to be able to pull graphs from the database for performance data over time on arbitrary tests. Most of the tests are simply too short to yield any useful trends. This is better suited to the idea of running certain tasks less frequently: various configurations should run longevity tests on full movies encoded in various formats, once every day or two. CPU runtime performance data (in contrast to wall clock time data) should be collected on those runs and graphed for analysis.

BTW, if anyone knows of better desktop graphing apps, do let me know. Any system, I guess. As long as it can read a CSV file and create a competent graph in a reasonable amount of time on a 2 GHz CPU, I should be able to tolerate it.