Evolution of Multimedia Fiefdoms

I want to examine how multimedia fiefdoms have risen and fallen through the years.


[Image: Medieval castle]

Back in the day, the multimedia fiefdoms were built around the formats put forth by competing companies: there was Microsoft/WMV, Apple/MOV, and Real/RM as the big contenders. On2 always wanted to be a player in this arena but could never quite catch a break. A few brave contenders held the line for open source and also for the power users who desired one application that could handle everything (my original motivation for wanting to get into multimedia hacking).

The computer desktop was the battleground for internet-based media streaming. Whatever happened to those days? Actually, if memory serves, Flash-based video streaming stepped on all of them.

Over the last 6-7 years, the battleground has expanded to cover mobile devices, where Flash’s impact has… lessened. During this time, multimedia technology pretty well standardized on a particular stack, namely, the MPEG (MP4/H.264/AAC) stack.

The belligerents in this war spent years trying to penetrate new territory, namely the living room, where the television lives. Progress was slow going due to various user interface and content issues, but it steadily improved.

Last April, Amazon announced their entry into the set-top box market with the Fire TV. That was when it suddenly crystallized for me that the multimedia ecosystem has radically shifted. Now, the multimedia fiefdoms revolve around access to content via streaming services.

Off the top of my head, here are some of the fiefdoms these days (ones I have experience using):

  • Netflix (subscription streaming)
  • Amazon (subscription, rental, and purchased streaming)
  • Hulu Plus (subscription streaming)
  • Apple (rental and purchased media)

I checked some results on Can I Stream.It? (which I refer to often) and found a bunch more streaming fiefdoms, such as Google (both Play and YouTube, which are separate services), Sony, Xbox 360, Crackle, Redbox Instant, Vudu, Target Ticket, Epix, SnagFilms, and XFINITY StreamPix. And these are just the services available in the United States; I know other geographical regions have their own fiefdoms.

What happened?

When I got into multimedia hacking, there were all these disparate, competing ecosystems. As a consumer, I didn’t care where the media came from, I just wanted to play it. That’s what inspired me to work on open source multimedia projects. Now I realize that I have the same problem 10-15 years later: there are multiple competing ecosystems. I might subscribe to fiefdoms X and Y, but am frustrated to learn that something I’d like to watch is only available through fiefdom Z. Very few of these fiefdoms can be penetrated using open source technology.

I’m not really sure what the point of this whole post is. Multimedia technology seems quite standardized these days, but that’s probably just my perspective, since I have spent way too long focusing on a few areas of multimedia technology such as audio and video coding. It’s interesting that all these services probably leverage the same limited set of codecs; their differentiation comes from the catalog of content each is able to license for streaming. There are different problems to solve in the multimedia arena now.

Visualizing Call Graphs Using Gephi

When I was at university studying computer science, I took a basic chemistry course. During an accompanying lab, the teaching assistant chatted me up and asked about my major. He then said, “Computer science? Well, that’s just typing stuff, right?”

My impulsive retort: “Sure, and chemistry is just about mixing together liquids and coming up with different colored liquids, as seen on the cover of my high school chemistry textbook, right?”


[Image: Chemistry fun]

In fact, pure computer science has precious little to do with typing (as is joked in CS circles, computer science is about computers in the same way that astronomy is about telescopes). However, people who study computer science often pursue careers as programmers, or to put it in fancier professional language, software engineers.

So, what’s a software engineer’s job? Isn’t it just typing? That’s where I’ve been going with this overly long setup. After thinking about it for long enough, I like to say that a software engineer’s trade is managing complexity.

A few years ago, I discovered Gephi, an open source tool for graph and data visualization. It looked neat but I didn’t have much use for it at the time. Recently, however, I was trying to get a better handle on a large codebase. I.e., I was trying to manage the project’s complexity. And then I thought of Gephi again.

Prior Work
One way to get a grip on a large C codebase is to instrument it for profiling and extract details from the profiler. On Linux systems, this means compiling and linking the code using the -pg flag. After running the executable, there will be a gmon.out file which is post-processed using the gprof command.
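
From memory, the basic flow looks something like this (the file and program names are placeholders, not from any particular project):

gcc -pg -o myprogram myprogram.c     # compile and link with profiling instrumentation
./myprogram                          # running it drops a gmon.out file in the current directory
gprof ./myprogram gmon.out > profile.txt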

GNU software development tools have a reputation for being rather powerful and flexible, but also extremely raw. This first hit home when I was learning how to use the GNU tool for code coverage — gcov — and the way it outputs very raw data that you need to massage with other tools in order to get really useful intelligence.
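
For the record, the gcov flow is similarly terse (again, placeholder names), and the resulting .gcov file is little more than a per-line execution count dump that needs further massaging:

gcc -fprofile-arcs -ftest-coverage -o myprogram myprogram.c
./myprogram
gcov myprogram.c     # writes myprogram.c.gcov with per-line execution counts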

And so it is with gprof output. The output gives you a list of functions sorted by the amount of processing time spent in each. Then it gives you a flattened call tree. This is arranged as “during the profiled executions, function c was called by functions a and b and called functions d, e, and f; function d was called by function c and called functions g and h”.

How can this call tree data be represented in a more instructive manner that is easier to navigate? My first impulse (and I don’t think I’m alone in this) is to convert the gprof call tree into a representation suitable for interpretation by Graphviz. Unfortunately, doing so tends to generate some enormous and unwieldy static images.
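
If you want to try that route anyway, the usual approach (as I understand it) is to pipe the gprof text through a converter such as the third-party gprof2dot script and hand the result to Graphviz’s dot tool, along these lines:

gprof ./myprogram | gprof2dot.py | dot -Tpng -o callgraph.png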

Feeding gprof Data To Gephi
I learned of Gephi a few years ago and recalled it when I developed an interest in gaining better perspective on a large base of alien C code. To understand what this codebase is doing for a particular use case, instrument it with gprof, gather execution data, and then study the code paths.

How could I feed the gprof data into Gephi? Gephi supports numerous graphing formats including an XML-based format named GEXF.

Thus, the challenge becomes converting gprof output to GEXF.

Which I did.
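
To give a rough idea of the target format, here is what a trivial directed call graph looks like when written out by hand as GEXF (using the two functions from the demo below; the exact attributes my script emits may differ):

<?xml version="1.0" encoding="UTF-8"?>
<gexf xmlns="http://www.gexf.net/1.2draft" version="1.2">
  <graph defaultedgetype="directed">
    <nodes>
      <node id="0" label="decode_coeffs_b"/>
      <node id="1" label="iwht_iwht_4x4_add_c"/>
    </nodes>
    <edges>
      <edge id="0" source="0" target="1" weight="18774"/>
    </edges>
  </graph>
</gexf>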

Demonstration
I have been absent from FFmpeg development for a long time, which is a pity because a lot of interesting development has occurred over the last 2-3 years after a troubling period of stagnation. I know that 2 big video codec developments have been HEVC (next in the line of MPEG codecs) and VP9 (heir to VP8’s throne). FFmpeg implements them both now.

I decided I wanted to study the code flow of VP9. So I got the latest FFmpeg code from git and built it using the options "--extra-cflags=-pg --extra-ldflags=-pg". Annoyingly, I also needed to specify "--disable-asm" because gcc complains of some register allocation snafus when compiling inline ASM in profiling mode (and this is on x86_64). No matter; ASM isn’t necessary for understanding overall code flow.
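
In concrete terms, the build boiled down to something like this:

./configure --extra-cflags=-pg --extra-ldflags=-pg --disable-asm
make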

After compiling, the binary ‘ffmpeg_g’ will have symbols and be instrumented for profiling. I grabbed a sample from this VP9 test vector set and went to work.

./ffmpeg_g -i vp90-2-00-quantizer-00.webm -f null /dev/null
gprof ./ffmpeg_g > vp9decode.txt
convert-gprof-to-gexf.py vp9decode.txt > ~/bigdisk/vp9decode.gexf

Gephi loads vp9decode.gexf with no problem. Using Gephi, however, can be a bit challenging if one is not versed in data exploration jargon. I recommend this Gephi getting started guide in slide deck form. Here’s what the default graph looks like:


[Image: gprof-ffmpeg-gephi-1]

Not very pretty or helpful. BTW, that beefy arrow running from mid-top to lower-right is the call from decode_coeffs_b -> iwht_iwht_4x4_add_c. There were 18774 calls from the former to the latter in this execution. Right now, the edge thicknesses correlate to the number of calls between the nodes, which I’m not sure is the best representation.


Vedanti and Max Sound vs. Google

Vedanti Systems Limited (VSL) and Max Sound Corporation filed a lawsuit against Google recently. Ordinarily, I wouldn’t care about corporate legal battles. However, this one interests me because it’s multimedia-related. I’m curious to know how coding technology patents might hold up in a real court case.

Here’s the most entertaining complaint in the lawsuit:

Despite Google’s well-publicized Code of Conduct — “Don’t be Evil” — which it explains is “about doing the right thing,” “following the law,” and “acting honorably,” Google, in fact, has an established pattern of conduct which is the exact opposite of its claimed piety.

I wonder if this is the first known case in which Google has been sued over its famous “Don’t be evil” mantra?

Researching The Plaintiffs

Server Move For multimedia.cx

I made a big change to multimedia.cx last week: I moved hosting from a shared web hosting plan that I had been using for 10 years to a dedicated virtual private server (VPS). In short, I now have no one to blame but myself for any server problems I experience from here on out.

The tipping point occurred a few months ago when my game music search engine kept breaking regardless of what technology I was using. First, I had an admittedly odd C-based CGI solution which broke due to mysterious binary compatibility issues, the sort that are bound to occur when trying to make a Linux binary run on heterogeneous distributions. The second was an SQLite-based solution. Like the first, it worked great until it didn’t: something else mysteriously broke vis-à-vis PHP and SQLite on my server. I started investigating a MySQL-based full-text search solution but couldn’t make it work, and decided that I shouldn’t have to either.

Ironically, just before I finished this entire move operation, I noticed that my SQLite-based FTS solution was working again on the old shared host. I’m not sure when that problem went away. No matter, I had already thrown the switch.

How Hard Could It Be?
We all have thresholds for the type of chores we’re willing to put up with and which we’d rather pay someone else to perform. For the past 10 years, I felt that administering a website’s underlying software is something that I would rather pay someone else to worry about. To be fair, 10 years ago, I don’t think VPSs were a thing, or at least a viable thing in the consumer space, and I wouldn’t have been competent enough to properly administer one. Though I would have been a full-time Linux user for 5 years at that point, I was still the type to build all of my own packages from source (I may have still been running Linux From Scratch 10 years ago) which might not be the most tractable solution for server stability.

These days, VPSs are a much more affordable option (easily competitive with shared web hosting). I also realized I know exactly how to install and configure all the software that runs the main components of the various multimedia.cx sites, having done it on local setups just to ensure that my automated backups would actually be useful in the event of catastrophe.

All I needed was the will to do it.

The Switchover Process
Here’s the rough plan:

  • Investigate options for both VPS providers and mail hosts; I might be willing to run a web server but NOT a mail server
  • Start plotting several months in advance of my yearly shared hosting renewal date
  • Screw around for several months, playing video games and generally finding reasons to put off the move
  • Panic when realizing there are only a few days left before the yearly renewal comes due

So that’s the planning phase. BTW, I chose Digital Ocean for VPS and Zoho for email hosting. Here’s the execution phase I did last week:

  • Register with Digital Ocean and set up DNS entries to point to the old shared host for the time being
  • Once the D-O DNS servers respond correctly to a manual ‘dig’ query (see the example after this list), use their servers as the authoritative ones for multimedia.cx
  • Create a new Droplet (D-O VPS), install all the right software, move the databases, upload the files; and exhaustively document each step, gotcha, and pitfall; treat a VPS as necessarily disposable and have an eye towards iterating the process with a new VPS
  • Use /etc/hosts on a local machine to point DNS to the new server and verify that each site is working correctly
  • After everything looks all right, update the DNS records to point to the new server
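
To illustrate the ‘dig’ check and the /etc/hosts override from the list above (the IP address is just a placeholder, not my droplet’s real address):

dig @ns1.digitalocean.com multimedia.cx A      # ask a D-O nameserver directly
# temporary line in /etc/hosts on my desktop while testing the new droplet:
203.0.113.10    multimedia.cx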

Finally, flip the switch on the MX record by pointing it to the new email provider.

Improvements and Problems
Hosting on Digital Ocean is quite amazing so far. Maybe it’s the SSDs. Whatever it is, all the sites are performing far better than on the old shared web host. People who edit the MultimediaWiki report that changes get saved in less than the 10 or so seconds required on the old server.

Again, all problems are now my problems. A sore spot with the shared web host was general poor performance. The hosting company would sometimes complain that my sites were using too much CPU. I would have loved to try to optimize things, but the cPanel interface found on many shared hosts doesn’t give you a great deal of data for debugging performance problems. Meanwhile, the same sites, the same software, and the same load are considerably more performant on the VPS.

Problem: I’ve already had the MySQL database die due to a spike in usage, and I had to restart it manually. I’ve been considering a cron-based solution that checks whether the server is running and restarts it if not. When I argued that my databases are mostly read and rarely modified, so the occasional crash shouldn’t be too disastrous, a friend helpfully reminded me that, “You would not make a good sysadmin with attitudes like ‘an occasional crash is okay’.”
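
For what it’s worth, the cron idea would be nothing fancier than a single line like this (assuming mysqladmin can authenticate from root’s crontab, e.g. via a ~/.my.cnf, and that the service is named ‘mysql’; I haven’t actually deployed it yet):

*/5 * * * *  mysqladmin ping > /dev/null 2>&1 || service mysql restart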

To this end, I am planning to migrate the database server to a separate VPS. This is a strategy that even Digital Ocean recommends. I’m hoping that the MySQL server isn’t subject to such memory spikes, but I’ll continue to monitor it after I set it up.

Overall, the server continues to get modest amounts of traffic. I predict it will remain that way unless Dark Shikari resurrects the x264dev blog. The biggest spike that multimedia.cx ever saw was when Steve Jobs linked to this WebM post.

Dropped Sites
There are a bunch of subdomains I dropped because I hadn’t done anything with them for years and I doubt anyone will notice they’re gone. One notable section that I decided to drop is the samples.mplayerhq.hu archive. It will live on, but it will be hosted at samples.ffmpeg.org, which had a full mirror anyway. The lower-end VPS instances simply don’t have the 53 GB of storage the archive requires.

Going Forward
Here’s to another 10 years of multimedia.cx, even if multimedia isn’t as exciting as it was 10 years ago (personal opinion; I’ll have another post on this later). But at least I can get to work on some other projects now that this is done. For the past 4 months or so, whenever I thought of doing some other project, I remembered that this server move took priority over everything else.