Breaking Eggs And Making Omelettes

Topics On Multimedia Technology and Reverse Engineering


Archives:

Evolution of Multimedia Fiefdoms

September 30th, 2014 by Multimedia Mike

I want to examine how multimedia fiefdoms have risen and fallen through the years.


Medieval Castle

Back in the day, the multimedia fiefdoms were built around the formats put forth by competing companies: there was Microsoft/WMV, Apple/MOV, and Real/RM as the big contenders. On2 always wanted to be a player in this arena but could never quite catch a break. A few brave contenders held the line for open source and also for the power users who desired one application that could handle everything (my original motivation for wanting to get into multimedia hacking).

The computer desktop was the battleground for internet-based media streaming. Whatever happened to those days? Actually, if memory serves, Flash-based video streaming stepped on all of them.

Over the last 6-7 years, the battleground has expanded to cover mobile devices, where Flash’s impact has… lessened. During this time, multimedia technology pretty well standardized on a particular stack, namely, the MPEG (MP4/H.264/AAC) stack.

The belligerents in this war tried for years to effectively penetrate new territory, namely the living room, where the television lived. This had been slow going for years due to various user interface and content issues, but steadily improved.

Last April, Amazon announced their entry into the set-top box market with the Fire TV. That was when it suddenly crystallized for me that the multimedia ecosystem has radically shifted. Now, the multimedia fiefdoms revolve around access to content via streaming services.

Off the top of my head, here are some of the fiefdoms these days (fiefdoms I have experience using):

  • Netflix (subscription streaming)
  • Amazon (subscription, rental, and purchased streaming)
  • Hulu Plus (subscription streaming)
  • Apple (rental and purchased media)

I checked some results on Can I Stream.It? (which I refer to often) and found a bunch more streaming fiefdoms such as Google (both Play and YouTube, which are separate services), Sony, Xbox 360, Crackle, Redbox Instant, Vudu, Target Ticket, Epix, SnagFilms, and XFINITY StreamPix. And these are probably just the services available in the United States; I know other geographical regions have their own fiefdoms.

What happened?

When I got into multimedia hacking, there were all these disparate, competing ecosystems. As a consumer, I didn’t care where the media came from, I just wanted to play it. That’s what inspired me to work on open source multimedia projects. Now I realize that I have the same problem 10-15 years later: there are multiple competing ecosystems. I might subscribe to fiefdoms X and Y, but am frustrated to learn that something I’d like to watch is only available through fiefdom Z. Very few of these fiefdoms can be penetrated using open source technology.

I’m not really sure about the point of this whole post. Multimedia technology seems really standardized these days. But that’s probably just my perspective because I have spent way too long focusing on a few areas of multimedia technology such as audio and video coding. It’s interesting that all these services probably leverage the same limited number of codecs. Their differentiation comes from the catalog of content that each is able to license for streaming. There are different problems to solve in the multimedia arena now.

Posted in General | 1 Comment »

Visualizing Call Graphs Using Gephi

August 31st, 2014 by Multimedia Mike

When I was at university studying computer science, I took a basic chemistry course. During an accompanying lab, the teaching assistant chatted me up and asked about my major. He then said, “Computer science? Well, that’s just typing stuff, right?”

My impulsive retort: “Sure, and chemistry is just about mixing together liquids and coming up with different colored liquids, as seen on the cover of my high school chemistry textbook, right?”


Chemistry fun

In fact, pure computer science has precious little to do with typing (as is joked in CS circles, computer science is about computers in the same way that astronomy is about telescopes). However, people who study computer science often pursue careers as programmers, or to put it in fancier professional language, software engineers.

So, what’s a software engineer’s job? Isn’t it just typing? That’s where I’ve been going with this overly long setup. After thinking about it for long enough, I like to say that a software engineer’s trade is managing complexity.

A few years ago, I discovered Gephi, an open source tool for graph and data visualization. It looked neat but I didn’t have much use for it at the time. Recently, however, I was trying to get a better handle on a large codebase. I.e., I was trying to manage the project’s complexity. And then I thought of Gephi again.

Prior Work
One way to get a grip on a large C codebase is to instrument it for profiling and extract details from the profiler. On Linux systems, this means compiling and linking the code using the -pg flag. After running the executable, there will be a gmon.out file which is post-processed using the gprof command.

GNU software development tools have a reputation for being rather powerful and flexible, but also extremely raw. This first hit home when I was learning how to use the GNU tool for code coverage — gcov — and the way it outputs very raw data that you need to massage with other tools in order to get really useful intelligence.

And so it is with gprof output. The output gives you a list of functions sorted by the amount of processing time spent in each. Then it gives you a flattened call tree. This is arranged as “during the profiled executions, function c was called by functions a and b and called functions d, e, and f; function d was called by function c and called functions g and h”.
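
Put another way, the useful information in that section is just a set of directed caller/callee relationships. Jotting the example above down as data (function names taken straight from the description, nothing more), it boils down to something like:

call_tree = {
    "c": {"callers": ["a", "b"], "callees": ["d", "e", "f"]},
    "d": {"callers": ["c"],      "callees": ["g", "h"]},
}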

How can this call tree data be represented in a more instructive manner that is easier to navigate? My first impulse (and I don’t think I’m alone in this) is to convert the gprof call tree into a representation suitable for interpretation by Graphviz. Unfortunately, doing so tends to generate some enormous and unwieldy static images.

Feeding gprof Data To Gephi
I learned of Gephi a few years ago and recalled it when I developed an interest in gaining better perspective on a large base of alien C code. The idea: to understand what this codebase is doing for a particular use case, instrument it with gprof, gather execution data, and then study the code paths.

How could I feed the gprof data into Gephi? Gephi supports numerous graphing formats including an XML-based format named GEXF.

Thus, the challenge becomes converting gprof output to GEXF.

Which I did.
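
The conversion script itself isn’t reproduced in this post, but the core idea is easy to sketch. Assuming the caller/callee/call-count edges have already been scraped out of gprof’s call graph section (the fiddly part), emitting GEXF is just a bit of XML generation. Here is a minimal sketch in Python; the edge data and output filename are only examples:

# Minimal sketch: turn (caller, callee, call_count) edges scraped from
# gprof's call graph section into a GEXF file that Gephi can load.
import xml.etree.ElementTree as ET

def edges_to_gexf(edges, outfile):
    gexf = ET.Element("gexf", xmlns="http://www.gexf.net/1.2draft", version="1.2")
    graph = ET.SubElement(gexf, "graph", defaultedgetype="directed")
    nodes_el = ET.SubElement(graph, "nodes")
    edges_el = ET.SubElement(graph, "edges")

    node_ids = {}
    for caller, callee, _count in edges:
        for func in (caller, callee):
            if func not in node_ids:
                node_ids[func] = str(len(node_ids))
                ET.SubElement(nodes_el, "node", id=node_ids[func], label=func)

    for i, (caller, callee, count) in enumerate(edges):
        ET.SubElement(edges_el, "edge", id=str(i), source=node_ids[caller],
                      target=node_ids[callee], weight=str(count))

    ET.ElementTree(gexf).write(outfile, encoding="UTF-8", xml_declaration=True)

# e.g., the heaviest edge from the demonstration below:
edges_to_gexf([("decode_coeffs_b", "iwht_iwht_4x4_add_c", 18774)], "vp9decode.gexf")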

Demonstration
I have been absent from FFmpeg development for a long time, which is a pity because a lot of interesting development has occurred over the last 2-3 years after a troubling period of stagnation. I know that 2 big video codec developments have been HEVC (next in the line of MPEG codecs) and VP9 (heir to VP8’s throne). FFmpeg implements them both now.

I decided I wanted to study the code flow of VP9. So I got the latest FFmpeg code from git and built it using the options "--extra-cflags=-pg --extra-ldflags=-pg". Annoyingly, I also needed to specify "--disable-asm" because gcc complains of some register allocation snafus when compiling inline ASM in profiling mode (and this is on x86_64). No matter; ASM isn’t necessary for understanding overall code flow.

After compiling, the binary ‘ffmpeg_g’ will have symbols and be instrumented for profiling. I grabbed a sample from this VP9 test vector set and went to work.

./ffmpeg_g -i vp90-2-00-quantizer-00.webm -f null /dev/null
gprof ./ffmpeg_g > vp9decode.txt
convert-gprof-to-gexf.py vp9decode.txt > ~/bigdisk/vp9decode.gexf

Gephi loads vp9decode.gexf with no problem. Using Gephi, however, can be a bit challenging if one is not versed in any data exploration jargon. I recommend this Gephi getting started guide in slide deck form. Here’s what the default graph looks like:


gprof-ffmpeg-gephi-1

Not very pretty or helpful. BTW, that beefy arrow running from mid-top to lower-right is the call from decode_coeffs_b -> iwht_iwht_4x4_add_c. There were 18774 calls from the former to the latter in this execution. Right now, the edge thicknesses correlate to the number of calls between the nodes, which I’m not sure is the best representation.

Read the rest of this entry »

Posted in General | 1 Comment »

Vedanti and Max Sound vs. Google

August 13th, 2014 by Multimedia Mike

Vedanti Systems Limited (VSL) and Max Sound Corporation filed a lawsuit against Google recently. Ordinarily, I wouldn’t care about corporate legal battles. However, this one interests me because it’s multimedia-related. I’m curious to know how coding technology patents might hold up in a real court case.

Here’s the most entertaining complaint in the lawsuit:

Despite Google’s well-publicized Code of Conduct — “Don’t be Evil” — which it explains is “about doing the right thing,” “following the law,” and “acting honorably,” Google, in fact, has an established pattern of conduct which is the exact opposite of its claimed piety.

I wonder if this is the first known case in which Google has been sued over its well-known “Don’t be evil” mantra.

Researching The Plaintiffs
Read the rest of this entry »

Posted in Legal/Ethical | 12 Comments »

Server Move For multimedia.cx

July 31st, 2014 by Multimedia Mike

I made a big change to multimedia.cx last week: I moved hosting from a shared web hosting plan that I had been using for 10 years to a dedicated virtual private server (VPS). In short, I now have no one to blame but myself for any server problems I experience from here on out.

The tipping point occurred a few months ago when my game music search engine kept breaking regardless of what technology I was using. First, I had an admittedly odd C-based CGI solution which broke due to mysterious binary compatibility issues, the sort that are bound to occur when trying to make a Linux binary run on heterogeneous distributions. The second solution was an SQLite-based solution. Like the first solution, this worked great until it didn’t work anymore. Something else mysteriously broke vis-à-vis PHP and SQLite on my server. I started investigating a MySQL-based full text search solution but couldn’t make it work, and decided that I shouldn’t have to either.

Ironically, just before I finished this entire move operation, I noticed that my SQLite-based FTS solution was working again on the old shared host. I’m not sure when that problem went away. No matter, I had already thrown the switch.

How Hard Could It Be?
We all have thresholds for the type of chores we’re willing to put up with and which we’d rather pay someone else to perform. For the past 10 years, I felt that administering a website’s underlying software is something that I would rather pay someone else to worry about. To be fair, 10 years ago, I don’t think VPSs were a thing, or at least a viable thing in the consumer space, and I wouldn’t have been competent enough to properly administer one. Though I would have been a full-time Linux user for 5 years at that point, I was still the type to build all of my own packages from source (I may have still been running Linux From Scratch 10 years ago) which might not be the most tractable solution for server stability.

These days, VPSs are a much more affordable option (easily competitive with shared web hosting). I also realized I know exactly how to install and configure all the software that runs the main components of the various multimedia.cx sites, having done it on local setups just to ensure that my automated backups would actually be useful in the event of catastrophe.

All I needed was the will to do it.

The Switchover Process
Here’s the rough plan:

  • Investigate options for both VPS providers and mail hosts– I might be willing to run a web server but NOT a mail server
  • Start plotting several months in advance of my yearly shared hosting renewal date
  • Screw around for several months, playing video games and generally finding reasons to put off the move
  • Panic when realizing there are only a few days left before the yearly renewal comes due

So that’s the planning phase. BTW, I chose Digital Ocean for VPS and Zoho for email hosting. Here’s the execution phase I did last week:

  • Register with Digital Ocean and set up DNS entries to point to the old shared host for the time being
  • Once the D-O DNS servers respond correctly using a manual ‘dig’ command, use their servers as the authoritative ones for multimedia.cx
  • Create a new Droplet (D-O VPS), install all the right software, move the databases, upload the files; and exhaustively document each step, gotcha, and pitfall; treat a VPS as necessarily disposable and have an eye towards iterating the process with a new VPS
  • Use /etc/hosts on a local machine to point DNS to the new server and verify that each site is working correctly
  • After everything looks all right, update the DNS records to point to the new server

Finally, flip the switch on the MX record by pointing it to the new email provider.

Improvements and Problems
Hosting on Digital Ocean is quite amazing so far. Maybe it’s the SSDs. Whatever it is, all the sites are performing far better than on the old shared web host. People who edit the MultimediaWiki report that changes get saved in less than the 10 or so seconds required on the old server.

Again, all problems are now my problems. A sore spot with the shared web host was general poor performance. The hosting company would sometimes complain that my sites were using too much CPU. I would have loved to try to optimize things. However, the cPanel interface found on many shared hosts doesn’t give you a great deal of data for debugging performance problems. Meanwhile, the same sites, same software, and same load are considerably more performant on the VPS.

Problem: I’ve already had the MySQL database die due to a spike in usage. I had to manually restart it. I was considering a cron-based solution to check if the server is running and restart it if not. When I offered the analysis that my databases are mostly read and rarely modified, so a crash shouldn’t be too disastrous, a friend helpfully reminded me that, “You would not make a good sysadmin with attitudes like ‘an occasional crash is okay’.”
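
The cron-based check I was considering would amount to something like the following sketch (the port, the service name, and the restart command are assumptions that would have to match the actual setup):

# Hypothetical watchdog to run from cron every few minutes: check whether
# MySQL answers on its port and restart the service if it does not.
import socket
import subprocess

def mysql_is_up(host="127.0.0.1", port=3306, timeout=5):
    try:
        conn = socket.create_connection((host, port), timeout=timeout)
        conn.close()
        return True
    except socket.error:
        return False

if __name__ == "__main__":
    if not mysql_is_up():
        # assumes a SysV-style init script; a systemd box would use systemctl
        subprocess.call(["service", "mysql", "restart"])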

To this end, I am planning to migrate the database server to a separate VPS. This is a strategy that even Digital Ocean recommends. I’m hoping that the MySQL server isn’t subject to such memory spikes, but I’ll continue to monitor it after I set it up.

Overall, the server continues to get modest amounts of traffic. I predict it will remain that way unless Dark Shikari resurrects the x264dev blog. The biggest spike that multimedia.cx ever saw was when Steve Jobs linked to this WebM post.

Dropped Sites
There are a bunch of subdomains I dropped because I hadn’t done anything with them for years and I doubt anyone will notice they’re gone. One notable section that I decided to drop is the samples.mplayerhq.hu archive. It will live on, but it will be hosted by samples.ffmpeg.org, which had a full mirror anyway. The lower-end VPS instances don’t have the 53 GB necessary.

Going Forward
Here’s to another 10 years of multimedia.cx, even if multimedia isn’t as exciting as it was 10 years ago (personal opinion; I’ll have another post on this later). But at least I can get working on some other projects now that this is done. For the past 4 months or so, whenever I think of doing some other project, I always remembered that this server move took priority over everything else.

Posted in General | 4 Comments »

Reverse Engineering Italian Literature

June 30th, 2014 by Multimedia Mike

Some time ago, Diego “Flameeyes” Pettenò tried his hand at reverse engineering a set of really old CD-ROMs containing even older Italian literature. The goal of this RE endeavor would be to extract the useful literature along with any structural metadata (chapters, etc.) and convert it to a more open format suitable for publication at, e.g., Project Gutenberg or Archive.org.

Unfortunately, the structure of the data thwarted the more simplistic analysis attempts (like inspecting for blocks of textual data). This will require deeper RE techniques. Further frustrating the effort, however, is the fact that the binaries that implement the reading program are written for the now-archaic Windows 3.1 operating system.

In pursuit of this RE goal, I recently thought of a way to glean more intelligence using DOSBox.

Prior Work
There are 6 discs in the full set (distributed along with 6 sequential issues of a print magazine named L’Espresso). Analysis of the contents of the various discs reveals that many of the files are the same on each disc. It was straightforward to identify the set of files which are unique on each disc. This set of files all end with the extension “LZn”, where n = 1..6 depending on the disc number. Further, the root directory of each disc has a file indicating the sequence number (1..6) of the CD. Obviously, these are the interesting targets.

The LZ file extensions stand out to an individual skilled in the art of compression– could it be a variation of the venerable LZ compression? That’s actually unlikely because LZ — also seen as LIZ — stands for Letteratura Italiana Zanichelli (Zanichelli’s Italian Literature).

The Unix ‘file’ command was of limited utility, unable to plausibly identify any of the files.

Progress was stalled.

Saying Hello To An Old Frenemy
I have been showing this screenshot to younger coworkers to see if any of them recognize it:


DOSBox running Windows 3.1

Not a single one has seen it before. Senior computer citizen status: Confirmed.

I recently watched an Ancient DOS Games video about Windows 3.1 games. This episode showed Windows 3.1 running under DOSBox. I had heard this was possible but that it took a little work to get running. I had a hunch that someone else had probably already done the hard stuff so I took to the BitTorrent networks and quickly found a download that had the goods ready to go– a directory of Windows 3.1 files that just had to be dropped into a DOSBox directory and they would be ready to run.

Aside: Running OS software procured from a BitTorrent network? Isn’t that an insane security nightmare? I’m not too worried since it effectively runs under a sandboxed virtual machine, courtesy of DOSBox. I suppose there’s the risk of trojan’d OS software infecting binaries that eventually leave the sandbox.

Using DOSBox Like ‘strace’
strace is a tool available on some Unix systems, including Linux, which is able to monitor the system calls that a program makes. In reverse engineering contexts, it can be useful to monitor an opaque, binary program to see the names of the files it opens and how many bytes it reads, and from which locations. I have written examples of this before (wow, almost 10 years ago to the day; now I feel old for the second time in this post).

Here’s the pitch: Make DOSBox perform as strace in order to serve as a platform for reverse engineering Windows 3.1 applications. I formed a mental model about how DOSBox operates — abstracted file system classes with methods for opening and reading files — and then jumped into the source code. Sure enough, the code was exactly as I suspected and a few strategic print statements gave me the data I was looking for.

Eventually, I even took to running DOSBox under the GNU Debugger (GDB). This hasn’t proven especially useful yet, but it has led to an absurd level of nesting:


GDB runs DOSBox runs Windows 3.1

The target application runs under Windows 3.1, which is running under DOSBox, which is running under GDB. This led to a crazy situation in which DOSBox had the mouse focus when a GDB breakpoint was triggered. At this point, DOSBox had all desktop input focus and couldn’t surrender it because it wasn’t running. I had no way to interact with the Linux desktop and had to reboot the computer. The next time, I took care to only use the keyboard to navigate the application and trigger the breakpoint and not allow DOSBox to consume the mouse focus.

New Intelligence
Read the rest of this entry »

Posted in Reverse Engineering | 16 Comments »

Playing With Emscripten and ASM.js

February 28th, 2014 by Multimedia Mike

The last 5 years or so have provided a tremendous amount of hype about the capabilities of JavaScript. I think it really kicked off when Google announced their Chrome web browser in September, 2008 along with its V8 JS engine. This seemed to spark an arms race in JS engine performance along with much hyperbole that eventually all software could, would, and/or should be written in straight JavaScript for maximum portability and future-proofing, perhaps aided by Emscripten, a tool which magically transforms C and C++ code into JS. The latest round of rhetoric comes courtesy of something called asm.js which purports to narrow the gap between JS and native code performance.

I haven’t been a believer, to express it charitably. But I wanted to be certain, so I set out to devise my own experiment to test modern JS performance.

Up Front Summary
I was extremely surprised that my experiment demonstrated JS performance FAR beyond my expectations. There might be something to these claims of magnificent JS speed in numerical applications. Basically, here were my thoughts during the process:

  • There’s no way that JavaScript can come anywhere close to C performance for a numerically intensive operation; a simple experiment should demonstrate this.
  • Here’s a straightforward C program to perform a simple yet numerically intensive operation.
  • Let’s compile the C program on gcc and get some baseline performance numbers.
  • Let’s use Emscripten to convert the C program to JavaScript and run it under Chrome.
  • Ha! Pitiful JS performance, just as I expected!
  • Try the same program under Firefox, since Firefox is supposed to have some crazy optimization for asm.js code, allegedly emitted by Emscripten.
  • LOL! Firefox performs even worse than Chrome!
  • Wait a minute… the Emscripten documentation mentioned using optimization levels for generating higher performance JS, so try ‘-O1’.
  • Umm… wow: Chrome’s performance increased dramatically! What about Firefox? Not only is Firefox faster than Chrome, it’s faster than the gcc-generated code!
  • As my faith in C is suddenly shaken to its core, I remembered to compile the gcc version with an explicit optimization level. The native C version pulled ahead of Firefox again, but the Firefox code is still close.
  • Aha! This is just desktop– but what about mobile? One of the leading arguments for converting everything to pure JavaScript is that such programs will magically run perfectly in mobile browsers. So I wager that this is where the experiment will fall over.
  • I proceed to try the same converted program on a variety of mobile platforms.
  • The mobile platforms perform rather admirably as well.
  • I am surprised.

The Experiment
I wanted to run a simple yet numerically-intensive and relevant benchmark, and something I am familiar with. I settled on JPEG image decoding. Again, I wanted to keep this simple, ideally in a single file because I didn’t know how hard it might be to deal with Emscripten. I found NanoJPEG, which is a straightforward JPEG decoder contained in a single C file.
Read the rest of this entry »

Posted in General | 7 Comments »

Long Overdue MediaWiki Upgrade

February 4th, 2014 by Multimedia Mike

What do I do? What do I do? This library book is 42 years overdue!
I admit that it’s mine, yet I can’t pay the fine,
Should I turn it in or should I hide it again?
What do I do? What do I do?

I internalized the foregoing paean to the perils of procrastination by Shel Silverstein in my formative years. It’s probably why I’ve never paid a single cent in late fees in my entire life.

However, I have been woefully negligent as the steward of the MediaWiki software that drives the world famous MultimediaWiki, the internet’s central repository of obscure technical knowledge related to multimedia. It is currently running version 1.6 of the software. The latest version is 1.22.

The Story So Far
According to my records, I first set up the wiki late in 2005. I don’t know which MediaWiki release I was using at the time. I probably conducted a few upgrades in the early days, but that went by the wayside perhaps in 2007. My web host stopped allowing shell access and the MediaWiki upgrade process pretty much requires running a PHP script from a command line. Upgrade time came around and I put off the project. Weeks turned into months turned into years until, according to some notes, the wiki abruptly stopped working in July, 2011. Suddenly, there were PHP errors about “Namespace” being a reserved word.

When I finally laid out a plan to upgrade the wiki after all these years, I found that the problem had been caused by my webhost upgrading from PHP 5.2 to 5.3. I also learned of a small number of code changes that caused the problem to go away, thus kicking the can down the road once more.

Then a new problem showed up last week. I think it might be related to a new version of PHP again. This time, a few other things on my site broke, and I learned that my webhost now allows me to select a PHP version to use (with the version then set to “auto”, which didn’t yield much information). Rolling back to an earlier version of PHP might have solved the problem easily.

But NO! I made the determination that this goes no further. I want this wiki upgraded.

The Arduous Upgrade Path
There are 2 general upgrade paths I can think of:
Read the rest of this entry »

Posted in General | 4 Comments »

Chrome’s New Audio Notifier

January 29th, 2014 by Multimedia Mike

Version 32 of Google’s Chrome web browser introduced this nifty feature:


Chrome audio notifier icon

When a browser tab has an element that is producing audio, the browser’s tab shows the above audio notification icon to inform the user. I have seen that people have a few questions about this, specifically:

  1. How does this feature work?
  2. Why wasn’t this done sooner?
  3. Are other browsers going to follow suit?

Short answers: 1) Chrome offers a new plugin API that the Flash Player is now using, as are Chrome’s internal media playing facilities; 2) this feature was contingent on the new plugin infrastructure mentioned in the previous answer; 3) other browsers would require the same infrastructure support.

Longer answers follow…

Plugin History
Plugins were originally based on the Netscape Plugin API. This was developed in the mid-1990s in order to support embedding PDFs into the Netscape web browser. The NPAPI does things like providing graphics contexts for drawing and input processing, and mediating network requests through the browser’s network facilities.

What NPAPI doesn’t do is handle audio. In the early-mid 1990s, audio support was not a widespread consideration in the consumer PC arena. Due to the lack of audio API support, if a plugin wanted to play audio, it had to go outside of the plugin framework.


NPAPI plugin model

There are a few downsides to this approach:

  • Every plugin author has to implement audio output on their own, separately for each operating system the plugin supports
  • Because the browser never mediates the audio, it has no idea whether a given plugin instance (and therefore a given tab) is producing sound

So that last item hopefully answers the question of why it has been so difficult for NPAPI-supporting browsers to implement what seems like it would be simple functionality, like implementing a per-tab audio notifier.

Plugin Future
Since Google released Chrome in an effort to facilitate advancements on the client side of the internet, they have made numerous efforts to modernize various legacy aspects of web technology. These efforts include the SPDY protocol, Native Client, WebM/WebP, and something called the Pepper Plugin API (PPAPI). This is a more modern take on the classic plugin architecture to supplant the aging NPAPI:


PPAPI plugin model

Right away, we see that the job of the plugin writer is greatly simplified. Where was this API years ago when I was writing my API jungle piece?

The Linux version of Chrome was apparently the first version that packaged the Pepper version of the Flash Player (doing so fixed an obnoxious bug in the Linux Flash Player interaction with GTK). Now, it looks like Windows and Mac have followed suit. Digging into the Chrome directory on a Windows 7 installation:

AppData\Local\Google\Chrome\Application\[version]\PepperFlash\pepflashplayer.dll

This directory exists for version 31 as well, which is still hanging around my system.

So, to reiterate: Chrome has a new plugin API that plugins use to access the audio API. Chrome knows when the API is accessed, and that allows the browser to display the audio notifier on a tab.

Other Browsers
What about other browsers? “Mozilla is not interested in or working on Pepper at this time. See the Chrome Pepper pages.”

Posted in General | 6 Comments »

Overthinking My Search Engine Problem

December 30th, 2013 by Multimedia Mike

I wrote a search engine for my Game Music Appreciation website, because the site would have been significantly less valuable without it (and I would eventually realize that the search feature is probably the most valuable part of this endeavor). I came up with a search solution that was a bit sketchy, but worked… until it didn’t. I thought of a fix but still searched for more robust and modern solutions (where ‘modern’ is defined as something that doesn’t require compiling a C program into a static CGI script and hoping that it works on a server I can’t debug on).

Finally, I realized that I was overthinking the problem– did you know that a bunch of relational database management systems (RDBMSs) support full text search (FTS)? Okay, maybe you did, but I didn’t know this.

Problem Statement
My goal is to enable users to search the metadata (title, composer, copyright, other tags) attached to various games. To do this, I want to index a series of contrived documents that describe the metadata. Here are 2 examples of these contrived documents; they’re interesting because both of these games have very different titles depending on region, something the search engine needs to account for:

system: Nintendo NES
game: Snoopy's Silly Sports Spectacular
author: None; copyright: 1988 Kemco; dumped by: None
additional tags: Donald Duck.nsf Donald Duck

system: Super Nintendo
game: Arcana
author: Jun Ishikawa, Hirokazu Ando; copyright: 1992 HAL Laboratory; dumped by: Datschge
additional tags: card.rsn.gamemusic Card Master Cardmaster

The index needs to map these documents to various pieces of game music and the search solution needs to efficiently search these documents and find the various game music entries that match a user’s request.

Now that I’ve been looking at it for long enough, I’m able to express the problem surprisingly succinctly. If I had understood that much originally, this probably would have been simpler.

First Solution & Breakage
My original solution was based on SWISH-E. The CGI script was a C program that statically linked the SWISH-E library into a binary that miraculously ran on my web provider. At least, it ran until it decided to stop working a month ago when I added a new feature unrelated to search. It was a very bizarre problem, the details of which would probably bore you to tears. But if you care, the details are all there in the Stack Overflow question I asked on the matter.

While no one could think of a direct answer to the problem, I eventually thought of a roundabout fix. The problem seemed to pertain to the static linking. Since I couldn’t count on the relevant SWISH-E library to be on my host’s system, I uploaded the shared library to the same directory as the CGI script and used dlopen()/dlsym() to fetch the functions I needed. It worked again, but I didn’t know for how long.
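
For illustration, the same dynamic-loading dance can be expressed in a couple of lines of Python with ctypes; the actual fix lives in the C CGI program calling dlopen()/dlsym() directly, and the library filename and symbol name below are assumptions:

# dlopen() the shared object uploaded alongside the CGI script, then
# dlsym() the specific entry point needed (names here are hypothetical).
import ctypes

swish = ctypes.CDLL("./libswish-e.so.2")
swish_init = swish.SwishInit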

Searching For A Hosted Solution
I know that anything is possible in this day and age; my web host may be fairly limited, but there are lots of solutions for things like this, and you can deploy any technology you want for reasonable prices. I figured that there must be a hosted solution out there.

I have long wanted a compelling reason to really dive into Amazon Web Services (AWS) and this sounded like a good opportunity. After all, my script works well enough; if I could just find a simple Linux box out there where I could install the SWISH-E library and compile the CGI script, I should be good to go. AWS has a free tier and I started investigating this approach. But it seems like a rabbit hole with a lot of moving pieces necessary for such a simple task.

I had heard that AWS had something in this area. Sure enough, it’s called CloudSearch. However, I’m somewhat discouraged by the fact that it would cost me around $75 per month to run the smallest type of search instance which is at the core of the service.

Finally, I came to another platform called Heroku. It’s supposed to be super-scalable while having a free tier for hobbyists. I started investigating FTS on Heroku and found this article which recommends using the FTS capabilities of their standard hosted PostgreSQL solution. However, the free tier of Postgres hosting only allows for 10,000 rows of data. Right now, my database has about 5400 rows. I expect it to easily overflow the 10,000 limit as soon as I incorporate the C64 SID music corpus.

However, this Postgres approach planted a seed.

RDBMS Revelation
I have 2 RDBMSs available on my hosting plan– MySQL and SQLite (the former is a separate service while SQLite is built into PHP). I quickly learned that both have FTS capabilities. Since I like using SQLite so much, I elected to leverage its FTS functionality. And it’s just this simple:

CREATE VIRTUAL TABLE gamemusic_metadata_fts USING fts3
( content TEXT, game_id INT, title TEXT );

SELECT game_id, title FROM gamemusic_metadata_fts WHERE content MATCH "arcana";
479|Arcana

The ‘content’ column gets the metadata pseudo-documents. The SQL gets wrapped up in a little PHP so that it queries this small database and turns the result into JSON. The script is then ready as a drop-in replacement for the previous script.
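
For the curious, the wrapper’s whole job can be sketched in a few lines; the production script is PHP, but the flow is identical (the database filename and output field names below are assumptions):

# Sketch of the query-and-JSON-encode step behind the search feature.
import json
import sqlite3

def search_game_music(db_path, terms):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT game_id, title FROM gamemusic_metadata_fts WHERE content MATCH ?",
        (terms,)
    ).fetchall()
    conn.close()
    return json.dumps([{"game_id": gid, "title": title} for gid, title in rows])

print(search_game_music("gamemusic.sqlite", "arcana"))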

Posted in General | 5 Comments »

Adding AY Files To The Game Music Website

November 30th, 2013 by Multimedia Mike

For the first time since I launched the site in the summer of last year, I finally added support for some new systems on my Game Music Appreciation site: a set of chiptune music files which bear the file extension AY. These files come from games that were on the ZX Spectrum and Amstrad CPC computer systems.


ZX Spectrum   Amstrad CPC

Right now, there are over 650 ZX Spectrum games on the site while there are all of 20 Amstrad CPC games. The latter system seems a bit short-changed, but I read that a lot of Amstrad games were straight ports from the Spectrum anyway since the systems possessed assorted similarities. This might help explain the discrepancy.

Technically
The AY corpus has always been low hanging fruit due to the fact that the site already supports the format courtesy of the game-music-emu backend. The thing that blocked me was that I didn’t know much about these systems. I knew that there were 2 systems (and possibly more) that shared the same chiptune format. Apparently, these machines were big in Europe (I was only vaguely aware of them before I started this project).

Both the Spectrum and the Amstrad used Zilog Z-80 CPUs for computing and created music using a General Instrument synthesizer chip designated AY-3-8912, hence the chiptune file extension AY. This chip has 3 channels, similar to the C64 SID chip. Additionally, there’s a fourth channel that game-music-emu calls “beeper” (and which Wikipedia describes as “one channel with 10 octaves”). Per my listening, it seems similar to the old PC speaker/honker. The metadata for a lot of the songs will specify either (AY) or (Beeper).

Wrangling Metadata
Large collections of AY files are easy to find; as is typical for pure chiptunes, the files are incredibly small.

As usual, the hardest part of the whole process was munging metadata. There seem to be 2 slightly different conventions for AY metadata, likely from 2 different people doing the bulk of the work and releasing the fruits of their labor into the wild. After I recognized the subtle differences between the 2 formats, it was straightforward to craft a tool to perform most of the work, leaving only a minimum of cleanup effort required afterwards.

(As an aside, I think this process is called extract – transform – load, or ETL. Sounds fancy and complicated, yet it’s technically one of the first computer programming tasks I was ever paid to perform.)

Collateral Damage
While pushing this feature, I managed to break the site’s search engine. The search solution I developed was always sketchy (involving compiling a C program as a static binary CGI script and trusting it to run on the server). I will probably need to find a better approach, preferably sooner than later.

Posted in General | 5 Comments »
