New recording, and new video! I’ve put together, again mostly automatically, a rough clone of the NIN single “The Hand That Feeds” using, again for reasons that mostly escape even me, random sitcom audio. Samples from Frasier used to rebuild NIN tracks never sounded better! Which is not saying much!
Here’s the music track in isolation (or download the mp3 here):
But, a little background on what’s going on!
So it’s been interesting seeing reactions to The Seattleward Spiral: some folks love the concept and hate the execution, some folks like the raw weird noise of the thing, and a lot of people have opinions on how it might work better. And certainly there are a lot of different things that could in theory be done to make it work better — if I had known the project was going to get so much attention I would have taken it a little more seriously in the first place. Heh.
But, so! How to do it better? There are a lot of possibilities; unfortunately, many of them involve rewriting portions of afromb.py, which I’m frankly not ready to do — I need to get more comfortable with Python, and much more familiar with what Remix can and can’t do, before I can really get into that territory. (If you missed the previous post: afromb.py is a Python script that uses the Echo Nest Remix API, a music analysis and manipulation library, to rebuild a song (song “a”) from the pieces of another song (song “b”) by slicing both up into tiny bits and then trying to match those bits together heuristically. To put it simply. It’s the clever bit of guts at the center of these experiments. I did not write it; I’m just enjoying using it and trying to learn more about how the whole thing works.)
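If you want the flavor of that slice-and-match idea without reading afromb.py itself, here’s a toy sketch. To be clear: this is not the script’s actual code or the real Remix API — the feature vectors, function names, and the simple nearest-neighbor distance are all stand-ins I made up for illustration. The real thing works on Echo Nest analysis segments with timbre, pitch, and loudness data.

```python
# Toy sketch of the afromb idea: slice both songs into segments,
# describe each segment with a feature vector, then replace every
# segment of song "a" with the closest-matching segment of song "b".
# All names and features here are hypothetical, for illustration only.

import math

def nearest_segment(target, candidates):
    """Index of the candidate feature vector closest to target (Euclidean)."""
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))
    return min(range(len(candidates)), key=lambda i: dist(target, candidates[i]))

def rebuild(a_features, b_features):
    """For each segment of song 'a', pick the best-matching segment of 'b'."""
    return [nearest_segment(seg, b_features) for seg in a_features]

# Toy example: three segments of "a", four of "b", each described by a
# made-up (loudness, brightness) pair.
a = [(0.9, 0.2), (0.1, 0.8), (0.5, 0.5)]
b = [(0.0, 0.9), (1.0, 0.2), (0.5, 0.4), (0.2, 0.2)]
print(rebuild(a, b))  # one index into b per segment of a
```

The output is the recipe for the rebuilt track: segment 0 of the new song is segment 1 of song “b”, and so on. Everything interesting in the real script — what counts as a segment, what goes into the features, how ties and tempo get handled — lives in choices this sketch glosses over.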
One of the common suggestions that is practical at this point is to, instead of running afromb.py against an entire mix, run it against each individual solo track in a mix — guitar, bass, vocals, drums, etc. So, the guitar track gets imitated as best as possible by foreign audio; the bass does too, separately; and the vox; and the drums; and then all of that gets mixed together again at the end. The notion is that the sum of the parts may be a lot more listenable than a single pass would be.
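In practice the per-stem approach is just running the same script once per isolated track, all against the same donor audio, then mixing the outputs afterward. Here’s a small sketch that builds those command lines; note the argument order I’m using for afromb.py is an assumption — check the script’s own usage string before actually running it.

```python
# Hedged sketch of the per-stem batch: one afromb.py run per stem,
# all against the same donor audio. The afromb.py argument order shown
# here (stem, donor, output) is an assumption, not documented fact.

stems = ["guitar", "bass", "vocals", "drums"]
donor = "frasier_clips.wav"

def commands_for(stems, donor):
    """Build one afromb.py invocation per isolated stem."""
    cmds = []
    for stem in stems:
        cmds.append(["python", "afromb.py",
                     f"{stem}.wav",            # song "a": the stem to imitate
                     donor,                    # song "b": the donor audio
                     f"{stem}_rebuilt.wav"])   # output, mixed together later
    return cmds

for cmd in commands_for(stems, donor):
    print(" ".join(cmd))
```

The rebuilt stems then go back into the DAW for the mix, which is where the rest of this post picks up.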
And I’m happy to say that, having tried that out and produced the media below, it does improve things somewhat. There’s a steady beat — thanks largely to being able to rebuild percussion tracks in isolation — and the overall dynamics of the song are a lot better too (it gets loud, it gets quieter, there are notable section shifts). The track is still, note, not something that really sounds in a casual sense like The Hand That Feeds, or necessarily something you’d put on at a party; the resemblance is still primarily rhythmic, and while the overall feel is more musical than the tracks on Seattleward, there’s still no real harmonic or melodic correspondence between the original and the new creation. Solving that problem is for another day.
Building the track
Part of the challenge with this method (aside from it being a lot more time-consuming in general to produce a track) is getting my hands on those isolated instrumental and vocal tracks in the first place. Hence the return to Nine Inch Nails as source audio; they make available stems at remix.nin.com, which is honestly a fantastic move that I’d love to see more bands make. I grabbed the Garageband session for The Hand That Feeds (no Downward Spiral tracks available, and I’m out of date on NIN since like 1996, but I remember hearing this single when it came out), bounced out each of the seventeen individual tracks that make up the mix, and proceeded to throw the same Frasier clips at each of those as I did at the mixed tracks on Seattleward, using afromb.py to rebuild them.
Then I took those newly generated frasierplexes and imported them into the Garageband session alongside their respective source tracks from Trent and company, and made a rough mix of the new audio to try and generally mirror the sonic profile of the original. I threw a little distortion on the tracks that were replicating guitar and synth and bass, compressed things fairly aggressively, applied some panning to give it a little more sense of space and separation (if Frasier is saying four different things on four different tracks I want to at least give that some stereo spread to keep it from becoming totally mushy), put a little reverb and echo on a couple tracks, and, bam: The Crane That Feeds.
Producing the video was a cinch; I just used the vafromb.py script with the final mix as my a file and a random clip of Frasier singing on a telethon for the b file. Hence the weirdly camera-centric music video.