Breaking Eggs And Making Omelettes

Topics On Multimedia Technology and Reverse Engineering


Archives:

Subtitling Sierra RBT Files

June 1st, 2016 by Multimedia Mike

This is part 2 of the adventure started in my Subtitling Sierra VMD Files post. After I completed the VMD subtitling, The Translator discovered a wealth of animation files in a format called RBT (this apparently stands for “Robot”, though I think “Ribbit” would be a more fun expansion). What were we to do? We had come so far by solving the VMD subtitling problem for Phantasmagoria; it would be a shame to let the effort grind to a halt over this.

Fortunately, the folks behind the ScummVM project already figured out enough of the format to be able to decode the RBT files in Phantasmagoria.

In the end, I was successful in creating a completely standalone tool that can take a Robot file and a subtitle file and create a new Robot file with subtitles. The source code is here (subtitle-rbt.c). Here’s what the final result looks like:


[Image: Spanish refrigerator]
“What’s in the refrigerator?” I should note at this juncture that I am not sure whether this particular Robot file even has sound or dialogue, since I was conducting these experiments on a computer with non-working audio.

The RBT Format
I have created a new MultimediaWiki page describing the Robot Animation format based on the ScummVM source code. I have not worked with a format quite like this before. These are paletted animations consisting of a sequence of independent frames that are designed to be overlaid on top of a static background. Because of this, each frame encodes its own dimensions and its own origin coordinate on the game canvas. While the Phantasmagoria VMD files are usually 288×144 (double-sized at playback for the benefit of a 640×400 Super VGA canvas), these frames are meant to be plotted on a game field of roughly 576×288 (288×144 double-sized).
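
Conceptually, rendering a Robot frame then boils down to a positioned blit onto the background. A minimal sketch, ignoring transparency and clipping (the names are mine, not ScummVM’s):

    #include <stdint.h>
    #include <string.h>

    /* composite one paletted Robot frame onto the static background;
       each frame supplies its own origin (frame_x, frame_y) and size */
    static void blit_robot_frame(uint8_t *canvas, int canvas_stride,
                                 const uint8_t *frame, int frame_w, int frame_h,
                                 int frame_x, int frame_y)
    {
        int row;
        for (row = 0; row < frame_h; row++)
            memcpy(canvas + (frame_y + row) * canvas_stride + frame_x,
                   frame + row * frame_w,
                   frame_w);
    }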

Subtitling Sierra VMD Files

May 31st, 2016 by Multimedia Mike

I was contacted by a game translation hobbyist from Spain (henceforth known as The Translator). He had set his sights on Sierra’s 7-CD Phantasmagoria. This mammoth game was driven by a lot of FMV files and animations that have speech. These require language translation in the form of video subtitling. He’s lucky that he found possibly the one person on the whole internet who has just the right combination of skill, time, and interest to pull this off. And why would I care about helping? I guess I share a certain camaraderie with game hackers. Don’t act so surprised. You know what kind of stuff I like to work on.

The FMV format used in this game is VMD, which makes an appearance in numerous Sierra titles. FFmpeg already supports decoding this format, and FFmpeg also supports subtitling video. So, ideally, all that’s necessary to support this goal is to add a muxer for the VMD format that can carry raw video and audio (both of which the format supports). Implement video compression as extra credit.

The pipeline that I envisioned looks like this:


[Image: VMD Subtitling Process]

“Trivial!” I surmised. I just never learn, do I?

The Plan
So here’s my initial pitch, outlining the work I estimated that I would need to do towards the stated goal:

  1. Create a new file muxer that produces a syntactically valid VMD file with bogus video and audio data (see the muxer sketch after this list). Make sure it works with both FFmpeg’s playback system and the actual Phantasmagoria engine.
  2. Create a new video encoder that essentially operates in pass-through mode while correctly building a palette.
  3. Create a new basic encoder for the video frames.
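
As promised above, here is a minimal sketch of the muxer scaffolding from step 1, with all of the actual VMD header logic elided. This illustrates the general shape of a libavformat muxer of that era, not the code I ultimately wrote:

    static int vmd_write_header(AVFormatContext *s)
    {
        /* write a provisional VMD header; real values get patched later */
        return 0;
    }

    static int vmd_write_packet(AVFormatContext *s, AVPacket *pkt)
    {
        /* append the raw frame payload */
        avio_write(s->pb, pkt->data, pkt->size);
        return 0;
    }

    static int vmd_write_trailer(AVFormatContext *s)
    {
        /* seek back and finalize the header and frame table */
        return 0;
    }

    AVOutputFormat ff_vmd_muxer = {
        .name          = "vmd",
        .long_name     = NULL_IF_CONFIG_SMALL("Sierra VMD"),
        .extensions    = "vmd",
        .video_codec   = AV_CODEC_ID_VMDVIDEO,
        .audio_codec   = AV_CODEC_ID_VMDAUDIO,
        .write_header  = vmd_write_header,
        .write_packet  = vmd_write_packet,
        .write_trailer = vmd_write_trailer,
    };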

A big unknown for me was exactly how subtitle handling operates in FFmpeg. Thanks to this project, I now know. I was concerned because I was pretty sure that font rendering entails anti-aliasing, which bodes poorly for keeping the palette under 256 unique colors.

Computer Science Puzzle
When pondering how to process the palette, I was excited for the opportunity to exercise actual computer science. FFmpeg converts paletted frames to full RGB frames; then it needs to convert them back to paletted frames. I had a vague recollection of solving this problem once before when I was experimenting with a new paletted video codec, and I seem to recall doing the palette conversion in a very naive manner: a static 256-element array, processing each RGB pixel of the frame, checking whether the value already occurred in the table (an O(n) lookup) and adding it otherwise.
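
That naive approach fits in a dozen lines of C, which is probably why I reached for it. A sketch from memory, not the actual code:

    #include <stdint.h>

    static uint32_t palette[256];
    static int palette_count = 0;

    /* return the palette index for an RGB color, adding it if new;
       the linear scan makes this O(n) per pixel */
    static int find_or_add_color(uint32_t rgb)
    {
        int i;
        for (i = 0; i < palette_count; i++)
            if (palette[i] == rgb)
                return i;
        if (palette_count == 256)
            return -1;  /* palette full: frame is not representable */
        palette[palette_count] = rgb;
        return palette_count++;
    }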

Running Windows XP In 2016

January 1st, 2016 by Multimedia Mike

I have an interest in getting a 32-bit Windows XP machine up and running. I have a really good, if slightly dated, discarded computer that seemed like a good candidate for dedicating to this task. So the question is: can Windows XP still be installed from scratch on a computer, activated, and used in 2016? I wasn’t quite sure, since Microsoft formally ended support for Windows XP in the first half of 2014, and I wasn’t entirely certain what that ending meant in practice.

Spoiler: It’s still possible to install and activate Windows XP as of the writing of this post. It’s also possible to download and install all the updates published up until support ended.

The Candidate Computer
This computer was assembled either in late 2008 or early 2009. It was a beast at the time.


[Image: New old Windows XP computer]

It was built around the newly released NVIDIA GTX 280 video card. The case is a Thermaltake DH-101, which is a home theater PC enclosure. The motherboard is an Asus P5N32-SLI Premium with a Core 2 Duo X6800 2.93 GHz CPU on board. 2 GB of RAM and a 1.5 TB hard drive are also present.

The original owner handed it off to me because their family didn’t have much use for it anymore (too many other machines in the house). Plus it was really, obnoxiously loud. The noisy culprit was the stock blue fan that came packaged with the Intel processor (seen in the photo) whining at around 65 dB. I replaced the fan and brought the noise level way down.

As for connectivity, the motherboard has dual gigabit NICs (of 2 different chipsets for some reason) and onboard wireless 802.11g. I couldn’t make the latter work and this project was taking place a significant distance from my wired network. Instead, I connected a USB 802.11ac dongle and antenna which is advertised to work in both Windows XP and Linux. It works great under Windows XP. Meanwhile, making the adapter work under Linux provided a retro-computing adventure in which I had to modify C code to make the driver work.

So, score 1 for Windows XP over Linux here.

The Simple Joy of Retro-computing
One thing you have to watch out for when you get into retro-computing is the urge to rant about the good old days of computing. Most long-time computer users know the frustration well: computers keep getting faster by orders of magnitude, yet using them somehow feels slower and slower over successive software generations.

Things I Have Learned About Emscripten

August 31st, 2015 by Multimedia Mike

3 years ago, I released my Game Music Appreciation project, a website with a ludicrously uninspired title that gave users a relatively frictionless way to experience a range of specialized music files related to old video games. However, the site required a special Chrome plugin. Ever since that initial release, my #1 most requested feature has been a pure JavaScript version of the music player.

“Impossible!” I exclaimed. “There’s no way JS could ever run fast enough to run these CPU emulators and audio synthesizers in real time, and allow for the visualization that I demand!” Well, I’m pleased to report that I have proved me wrong. I recently (and quietly) launched a new site with what I hope is a catchier title, meant to evoke a cloud-based retro-music-as-a-service product: Cirrus Retro. Right now, it’s basically the same as the old site, but without the wonky Chrome-specific technology.

Along the way, I’ve learned a few things about using Emscripten that I thought might be useful to share with other people who wish to embark on a similar journey. This is geared toward someone with a stronger low-level background (such as C/C++) than a high-level one (like JavaScript).

General Goals
Do you want to cross-compile an entire desktop application, one that relies on an extensive GUI toolkit? That might be difficult (though I believe there is a path for porting Qt code directly with Emscripten). Your better bet might be to abstract out the core logic and processes of the program and then create a new web UI to access them.

Do you want to compile a game that basically just paints stuff to a 2D canvas? You’re in luck! Emscripten has a porting path for SDL. Make a version of your C/C++ software that targets SDL (generally not a tall order) and then compile that with Emscripten.
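
The pattern is the classic SDL loop with control of the main loop handed to the browser. A minimal sketch using the SDL 1.2 API that Emscripten supports out of the box (error checking omitted):

    #include <SDL/SDL.h>
    #ifdef __EMSCRIPTEN__
    #include <emscripten.h>
    #endif

    static SDL_Surface *screen;

    /* one iteration of the game/render loop */
    static void main_loop(void)
    {
        SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 0, 128));
        SDL_Flip(screen);
    }

    int main(void)
    {
        SDL_Init(SDL_INIT_VIDEO);
        screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
    #ifdef __EMSCRIPTEN__
        /* the browser owns the loop; Emscripten calls us back each frame */
        emscripten_set_main_loop(main_loop, 0, 1);
    #else
        while (1) { main_loop(); SDL_Delay(16); }
    #endif
        return 0;
    }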

Do you just want to cross-compile some functionality that lives in a library? That’s what I’ve done with the Cirrus Retro project. For this, plan to compile the library into a JS file that exports some public functions that other, higher-level, native JS (i.e., JS written by a human and not a computer) will invoke.

Memory Levels
When porting C/C++ software to JavaScript using Emscripten, you have to think on 2 different levels. Or perhaps you need to view JavaScript through a low-level C lens, especially if you want to write native JS code that will interact with Emscripten-compiled code. This often means allocating chunks of memory via JS and passing them to the Emscripten-compiled functions. And you wouldn’t believe the gymnastics required to get native JS and Emscripten-compiled JS to cooperate.
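
Here is a sketch of that dance: the C side exports an entry point, and native JS allocates memory inside the Emscripten heap to pass across the boundary. The function names are hypothetical, but the malloc/HEAP pattern is the standard Emscripten idiom:

    #include <emscripten.h>

    /* hypothetical exported entry point; EMSCRIPTEN_KEEPALIVE keeps
       dead-code elimination from stripping it */
    EMSCRIPTEN_KEEPALIVE
    void render_audio(short *buffer, int frame_count)
    {
        int i;
        for (i = 0; i < frame_count * 2; i++)  /* interleaved stereo */
            buffer[i] = 0;  /* real code calls the emulator core here */
    }

    /* the native JS side (shown as a comment to stay in one language):
     *
     *   var bytes = frameCount * 2 * 2;        // 2 channels x 16 bits
     *   var ptr = Module._malloc(bytes);       // allocate in the heap
     *   Module._render_audio(ptr, frameCount); // call compiled code
     *   var samples = Module.HEAP16.subarray(
     *       ptr >> 1, (ptr >> 1) + frameCount * 2);
     *   // ...copy samples out before freeing...
     *   Module._free(ptr);
     */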

Emscripten and Web Audio API

April 28th, 2015 by Multimedia Mike

Ha! They said it couldn’t be done! Well, to be fair, I said it couldn’t be done. Or maybe that I just didn’t have any plans to do it. But I did it: I used Emscripten to cross-compile a CPU-intensive C/C++ codebase (Game Music Emu) to JavaScript. Then I leveraged the Web Audio API to output the audio and visualize it using an HTML5 canvas.

Want to see it in action? Here’s a demonstration. Perhaps I will be able to expand the reach of my Game Music site when I can drop the odd Native Client plugin. This JS-based player works great on Chrome, Firefox, and Safari across desktop operating systems.

But this endeavor was not without its challenges.

Programmatically Generating Audio
First, I needed to figure out the proper method for procedurally generating audio and making it available to output. Generally, there are 2 approaches for audio output:

  1. Sit in a loop and generate audio, writing it out via a blocking audio call
  2. Implement a callback that the audio system can invoke in order to generate more audio when needed

Option #1 is not a good idea for an event-driven language like JavaScript. So I hunted through the rather flexible Web Audio API for a method that allowed something like approach #2. Callbacks are everywhere, after all.

I eventually found what I was looking for in the ScriptProcessorNode. It seems intended for applying post-processing effects to audio streams: a program registers a callback that is passed configurable chunks of audio for processing. I subverted this by simply overwriting the input buffers with the audio generated by the Emscripten-compiled library.

The ScriptProcessorNode interface is fairly well documented and works across multiple browsers. However, it is already marked as deprecated:

Note: As of the August 29 2014 Web Audio API spec publication, this feature has been marked as deprecated, and is soon to be replaced by Audio Workers.

Despite the API having been deprecated for 8 months as of this writing, there is no appreciable documentation for its successor, the so-called Audio Workers.

Vive la web standards!

Visualize This
The next problem was visualization. The Web Audio API provides the AnalyserNode for accessing both time- and frequency-domain data from a running audio stream (fetching the data as either unsigned bytes or floating-point numbers, depending on what the application needs). This is a pretty neat idea. I just wish I could make the API work. The simple demos I could find worked well enough, but when I wired up a prototype to fetch and visualize the time-domain wave, all I got were center-point samples (an array of values that were all 128).

Even if the API did work, I’m not sure if it would have been that useful. Per my reading of the AnalyserNode API, it only returns data as a single channel. Why would I want that? My application supports audio with 2 channels. I want 2 channels of data for visualization.

How To Synchronize
So I rolled my own visualization solution by maintaining a circular buffer of audio samples as they were generated. Then, requestAnimationFrame() provided the rendering callbacks. The next problem was audio-visual sync, but that is certainly not unique to this situation: maintaining proper A/V sync is a perennial puzzle in real-time multimedia programming. I was able to glean enough timing information from the environment to achieve reasonable A/V sync (verify for yourself).
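
The buffer logic is simple. The real player does this in JavaScript, but the idea is sketched here in C for consistency with the other examples in these posts (names are mine):

    #define RING_SIZE 32768  /* samples; must exceed one render window */
    static short ring[RING_SIZE];
    static int write_pos = 0;

    /* the audio generation callback appends samples */
    static void ring_write(const short *samples, int count)
    {
        int i;
        for (i = 0; i < count; i++) {
            ring[write_pos] = samples[i];
            write_pos = (write_pos + 1) % RING_SIZE;
        }
    }

    /* the visualization callback reads the most recent window */
    static void ring_read_latest(short *out, int count)
    {
        int pos = (write_pos - count + RING_SIZE) % RING_SIZE;
        int i;
        for (i = 0; i < count; i++) {
            out[i] = ring[pos];
            pos = (pos + 1) % RING_SIZE;
        }
    }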

Pause/Resume
The next problem I encountered with the Web Audio API was pause/resume facilities, or the lack thereof. For all its bells and whistles, the API’s omission of such facilities seems most unusual, as if the design philosophy was, “Once the user starts playing audio, they will never, ever have cause to pause the audio.”

Then again, I must understand that mine is not a use case that the design committee considered and I’m subverting the API in ways the designers didn’t intend. Typical use cases for this API seem to include such workloads as:

  • Downloading, decoding, and playing back a compressed audio stream via the network, applying effects, and visualizing the result
  • Accessing microphone input, applying effects, visualizing, encoding and sending the data across the network
  • Firing sound effects in a gaming application
  • MIDI playback via JavaScript (this honestly amazes me)

What they did not seem to have in mind was what I am trying to do– synthesize audio in real time.

I implemented pause/resume in a sub-par manner: pausing generates 0 values when the ScriptProcessorNode callback is invoked, while also canceling any animation callbacks. Thus, audio output is technically still occurring; it’s just pure silence. It’s not a great solution because CPU is still being used.

Future Work
I have a lot more player libraries to port to this new system. But I think I have a good framework set up.


Dreamcast Track Sizes

February 28th, 2015 by Multimedia Mike

I’ve been playing around with Sega Dreamcast discs lately. Not playing the games on the DC discs, of course, just studying their structure. To review, the Sega Dreamcast game console used special optical discs named GD-ROMs, where the GD stands for “gigadisc”. They are capable of holding about 1 gigabyte of data.

You know what’s weird about these discs? Each one manages to actually store a gigabyte of data. Each disc has a CD portion and a GD portion. The CD portion occupies the first 45000 sectors and can be read in any standard CD drive. This area is divided between a brief data track and a brief track that is usually audio.

The GD region starts at sector 45000. Sometimes it’s just one humongous data track that consumes the entire GD region. More often, however, the data track is split between the first and last tracks in the region, with one or more audio tracks in between. But the weird thing is, the GD region is always full. I made a study of it (originally presented as a larger, interactive graph):


[Image: Dreamcast Track Sizes graph]

Some discs put special data or audio bonuses in the CD region for players to discover. But every disc manages to fill out the GD region. I checked up on a lot of those audio tracks that divide the GD data and they’re legitimate music tracks. So what’s the motivation? Why would the data track be split in 2 pieces like that?

I eventually realized that I probably answered this question in this blog post from 4 years ago. The read speed from the outside of an optical disc is higher than the inside of the same disc. When I inspect the outer data tracks of some of these discs, sure enough, there seem to be timing-sensitive multimedia FMV files living on the outer stretches.

One day, I’ll write a utility to take apart this split ISO-9660 filesystem, offset as it is from an unusual starting sector.


Dreamcast SD Adapter and DreamShell

December 30th, 2014 by Multimedia Mike

Nope! I’m never going to let go of the Sega Dreamcast hacking. When I was playing around with Dreamcast hacking early last year, I became aware that there is such a thing as an SD card adapter for the DC that plugs into the port normally reserved for the odd DC link cable. Of course I wanted to see what I could do with it.

The primary software that leverages the DC SD adapter is called DreamShell. Working with this adapter and the software requires some skill and guesswork. Searching for these topics tends to turn up results from various forums where people are trying to cargo-cult their way to solutions. I have a strange feeling that this post might become the unofficial English-language documentation on the matter.

Use Cases
What can you do with this thing? Undoubtedly, the primary use is for backing up (ripping) the contents of GD-ROMs (the custom optical format used for the DC) and playing those backed up (ripped) copies. Presumably, users of this device leverage the latter use case more than the former, i.e., download ripped games, load them on the SD card, and launch them using DreamShell.

However, there are other uses such as multimedia playback, system exploration, BIOS reprogramming, high-level programming, and probably a few other things I haven’t figured out yet.

Delivery
I put in an order via the dc-sd.com website and in about 2 short months, the item arrived from China. This marked my third lifetime delivery from China and curiously, all 3 of the shipments have pertained to the Sega Dreamcast.


[Image: Dreamcast SD Adapter package]


I thought it was very interesting that this adapter came in such complete packaging. The text is all in Chinese, though the back states “Windows 98 / ME / 2000 / XP, Mac OS 9.1, LINUX2.4”. That’s what tipped me off that they must have just cannibalized some old USB SD card readers and packaging in order to create these. Closer inspection of the internals through the translucent pink case confirms this.

Usage
According to its change log, DreamShell has been around for a long time with version 1.0.0 released in February of 2004. The current version is 4.0.0 RC3. There are several downloads available:

  1. DreamShell 4.0 RC 3 CDI Image
  2. DreamShell 4.0 RC 3 + Boot Loader
  3. DreamShell 4.0 RC 3 + Core CDI image

Option #2 worked for me. It contains a CDI disc image and the DreamShell files in a directory named DS/.

Burn the CDI to a CD-R in the normal way you would burn a bootable Dreamcast disc from a CDI image. This is open-ended and left as an exercise to the reader, since there are many procedures depending on platform. On Linux, I used a small script I found once called burncdi-dc.sh.

Then, copy the contents of the DS/ folder to an SD card. As for filesystem, FAT16 and FAT32 are both known to work. The files in DS/ should land in the root of the SD card; the folder DS/ should not be in the root.

Plug the SD card into the DC SD adapter and plug the adapter into the link cable port on the back of the Dreamcast. Then boot the disc. If it works, you will see this minor corruption of the usual Sega licensing screen:


[Image: DreamShell logo on Dreamcast startup]

Then, there will be a brief white-on-black text screen that explains the booting process:

Saying Goodbye To Old Machines

November 30th, 2014 by Multimedia Mike

I recently sent a few old machines off for recycling. Both had relevance to the early days of the FATE testing effort. As is my custom, I photographed them (poorly, of course).

First, there’s the PowerPC-based Mac Mini I procured thanks to a Craigslist ad in late 2006. I had plans to develop automated FFmpeg building and testing and was already looking ahead toward testing multiple CPU architectures. Again, this was 2006, and PowerPC wasn’t completely on the outs yet: although Apple’s MacTel transition was in full swing, the entire new generation of video game consoles was based on PowerPC.


[Image: PPC Mac Mini pieces]


I remember trying to find a Mac Mini PPC on Craigslist. Many were to be found, but all asked more than the price of even a new Intel Mac Mini, always because the seller was including all of last year’s applications and perhaps a monitor, neither of which I needed. Fortunately, I found this bare Mac Mini. Also fortunate was the fact that it was far easier to install Linux on it than on the first PowerPC machine I owned.

After FATE operation transitioned away from me, I kept the machine in service as an edge server and automated backup machine. That is, until the hard drive failed on reboot one day. Thus, when it was finally time to recycle the computer, I felt it necessary to disassemble the machine and remove the hard drive, first for possible data salvage and ultimately for destruction.

Evolution of Multimedia Fiefdoms

September 30th, 2014 by Multimedia Mike

I want to examine how multimedia fiefdoms have risen and fallen through the years.


[Image: Medieval Castle]

Back in the day, the multimedia fiefdoms were built around the formats put forth by competing companies: there was Microsoft/WMV, Apple/MOV, and Real/RM as the big contenders. On2 always wanted to be a player in this arena but could never quite catch a break. A few brave contenders held the line for open source and also for the power users who desired one application that could handle everything (my original motivation for wanting to get into multimedia hacking).

The computer desktop was the battleground for internet-based media streaming. Whatever happened to those days? Actually, if memory serves, Flash-based video streaming stepped on all of them.

Over the last 6-7 years, the battleground has expanded to cover mobile devices, where Flash’s impact has… lessened. During this time, multimedia technology pretty well standardized on a particular stack, namely, the MPEG (MP4/H.264/AAC) stack.

The belligerents in this war tried for years to effectively penetrate new territory, namely the living room, where the television lived. This had been slow going for years due to various user interface and content issues, but it steadily improved.

Last April, Amazon announced its entry into the set-top box market with the Fire TV. That was when it suddenly crystallized for me that the multimedia ecosystem had radically shifted. Now, the multimedia fiefdoms revolve around access to content via streaming services.

Off the top of my head, here are some of the fiefdoms these days (fiefdoms I have experience using):

  • Netflix (subscription streaming)
  • Amazon (subscription, rental, and purchased streaming)
  • Hulu Plus (subscription streaming)
  • Apple (rental and purchased media)

I checked some results on Can I Stream.It? (which I refer to often) and found a bunch more streaming fiefdoms such as Google (both Play and YouTube, which are separate services), Sony, Xbox 360, Crackle, Redbox Instant, Vudu, Target Ticket, Epix, SnagFilms, and XFINITY StreamPix. And these are probably just the services available in the United States; I know other geographical regions have their own fiefdoms.

What happened?

When I got into multimedia hacking, there were all these disparate, competing ecosystems. As a consumer, I didn’t care where the media came from, I just wanted to play it. That’s what inspired me to work on open source multimedia projects. Now I realize that I have the same problem 10-15 years later: there are multiple competing ecosystems. I might subscribe to fiefdoms X and Y, but am frustrated to learn that something I’d like to watch is only available through fiefdom Z. Very few of these fiefdoms can be penetrated using open source technology.

I’m not really sure what the point of this whole post is. Multimedia technology seems really standardized these days, but that’s probably just my perspective because I have spent way too long focusing on a few areas of multimedia technology such as audio and video coding. It’s interesting that all these services probably leverage the same limited number of codecs. Their differentiation comes from the catalog of content that each is able to license for streaming. There are different problems to solve in the multimedia arena now.


Visualizing Call Graphs Using Gephi

August 31st, 2014 by Multimedia Mike

When I was at university studying computer science, I took a basic chemistry course. During an accompanying lab, the teaching assistant chatted me up and asked about my major. He then said, “Computer science? Well, that’s just typing stuff, right?”

My impulsive retort: “Sure, and chemistry is just about mixing together liquids and coming up with different colored liquids, as seen on the cover of my high school chemistry textbook, right?”


[Image: Chemistry fun]

In fact, pure computer science has precious little to do with typing (as is joked in CS circles, computer science is about computers in the same way that astronomy is about telescopes). However, people who study computer science often pursue careers as programmers, or to put it in fancier professional language, software engineers.

So, what’s a software engineer’s job? Isn’t it just typing? That’s where I’ve been going with this overly long setup. After thinking about it for long enough, I like to say that a software engineer’s trade is managing complexity.

A few years ago, I discovered Gephi, an open source tool for graph and data visualization. It looked neat but I didn’t have much use for it at the time. Recently, however, I was trying to get a better handle on a large codebase. That is, I was trying to manage the project’s complexity. And then I thought of Gephi again.

Prior Work
One way to get a grip on a large C codebase is to instrument it for profiling and extract details from the profiler. On Linux systems, this means compiling and linking the code using the -pg flag. After running the executable, there will be a gmon.out file which is post-processed using the gprof command.

GNU software development tools have a reputation for being rather powerful and flexible, but also extremely raw. This first hit home when I was learning to use the GNU code coverage tool, gcov, which outputs very raw data that you need to massage with other tools in order to get really useful intelligence.

And so it is with gprof output. The output gives you a list of functions sorted by the amount of processing time spent in each. Then it gives you a flattened call tree. This is arranged as “during the profiled executions, function c was called by functions a and b and called functions d, e, and f; function d was called by function c and called functions g and h”.

How can this call tree data be represented in a more instructive manner that is easier to navigate? My first impulse (and I don’t think I’m alone in this) is to convert the gprof call tree into a representation suitable for interpretation by Graphviz. Unfortunately, doing so tends to generate some enormous and unwieldy static images.

Feeding gprof Data To Gephi
I learned of Gephi a few years ago and recalled it when I developed an interest in gaining better perspective on a large base of alien C code. To understand what this codebase is doing for a particular use case, instrument it with gprof, gather execution data, and then study the code paths.

How could I feed the gprof data into Gephi? Gephi supports numerous graphing formats including an XML-based format named GEXF.

Thus, the challenge becomes converting gprof output to GEXF.

Which I did.
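
For a directed call graph, GEXF is pleasantly minimal. Here is a hand-written sketch of the shape of the output, using the decode_coeffs_b -> iwht_iwht_4x4_add_c edge discussed below, with the call count as the edge weight:

    <?xml version="1.0" encoding="UTF-8"?>
    <gexf xmlns="http://www.gexf.net/1.2draft" version="1.2">
      <graph defaultedgetype="directed">
        <nodes>
          <node id="0" label="decode_coeffs_b"/>
          <node id="1" label="iwht_iwht_4x4_add_c"/>
        </nodes>
        <edges>
          <edge id="0" source="0" target="1" weight="18774"/>
        </edges>
      </graph>
    </gexf>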

Demonstration
I have been absent from FFmpeg development for a long time, which is a pity because a lot of interesting development has occurred over the last 2-3 years after a troubling period of stagnation. I know that 2 big video codec developments have been HEVC (next in the line of MPEG codecs) and VP9 (heir to VP8’s throne). FFmpeg implements them both now.

I decided I wanted to study the code flow of VP9. So I got the latest FFmpeg code from git and built it using the options "--extra-cflags=-pg --extra-ldflags=-pg". Annoyingly, I also needed to specify "--disable-asm" because gcc complains of some register allocation snafus when compiling inline ASM in profiling mode (and this is on x86_64). No matter; ASM isn’t necessary for understanding overall code flow.

After compiling, the binary ‘ffmpeg_g’ will have symbols and be instrumented for profiling. I grabbed a sample from this VP9 test vector set and went to work.

./ffmpeg_g -i vp90-2-00-quantizer-00.webm -f null /dev/null
gprof ./ffmpeg_g > vp9decode.txt
convert-gprof-to-gexf.py vp9decode.txt > ~/bigdisk/vp9decode.gexf

Gephi loads vp9decode.gexf with no problem. Using Gephi, however, can be a bit challenging if one is not versed in data exploration jargon. I recommend this Gephi getting started guide in slide deck form. Here’s what the default graph looks like:


[Image: default Gephi rendering of the VP9 decode call graph]

Not very pretty or helpful. BTW, that beefy arrow running from mid-top to lower-right is the call from decode_coeffs_b -> iwht_iwht_4x4_add_c. There were 18774 calls from the former to the latter in this execution. Right now, the edge thicknesses correlate to the number of calls between the nodes, which I’m not sure is the best representation.
