Zack and I have similar backgrounds. We both wanted to grow up to be Hendrix, we were both into extreme sports, and we both had early programming experience. We met at Berklee College of Music, as interns. Feeling the limitations of being in bands, and also the limitations of Ableton, we realized that to push the limits of extreme music, we needed to make it with code. Together we formed a band / hackathon team, Dadabots. Our project was "Destroying Soundcloud with Music Remix Bots".
This bot stretches and overlaps sections of songs, smoothing them out into pure ambience. Here it blurs out the song art too. Building extreme time stretch is probably the best way to fast-track yourself through an understanding of signal processing. It makes you realize the fundamental elements of music are: sines, transients, and noise.
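This isn't the bot's actual code, but the classic recipe for extreme stretch (Paulstretch-style) can be sketched in a few lines: read overlapping grains slowly, keep each grain's spectral magnitudes, scramble the phases so transients smear away, and overlap-add at normal speed.

```python
import numpy as np

def paulstretch(audio, stretch=8.0, window_size=1024, seed=0):
    """Paulstretch-style extreme time stretch (illustrative sketch).
    Grains are FFT'd, magnitudes kept, phases randomized, then
    overlap-added at a slower read rate -> pure ambience."""
    rng = np.random.default_rng(seed)
    window = np.hanning(window_size)
    out_hop = window_size // 2
    in_hop = out_hop / stretch            # read slower than we write
    n_grains = int((len(audio) - window_size) / in_hop)
    out = np.zeros(n_grains * out_hop + window_size)
    for i in range(n_grains):
        start = int(i * in_hop)
        grain = audio[start:start + window_size] * window
        spectrum = np.fft.rfft(grain)
        # keep the magnitude (the "sines" and "noise"), scramble the
        # phase: transients lose their alignment and melt into ambience
        phases = rng.uniform(0, 2 * np.pi, len(spectrum))
        grain = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases))
        out[i * out_hop : i * out_hop + window_size] += grain * window
    return out
```

The phase scrambling is exactly why it sounds like ambience: sines survive as steady magnitude peaks, while transients depend on phase alignment across frequencies and get destroyed.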
Autochiptune was our most popular bot. It splits a song into harmony and percussion. In a spectrogram, harmony shows up as horizontal lines and percussion as vertical lines, so you can split them apart. The percussion is bitcrushed. The harmony is reconstructed by binning the spectrogram into pitches and resynthesizing them with NES-like triangle and square waves. This bot pixelates the cover art. I intended the original art to represent Mammon, the biblical demon of material wealth.
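The standard way to exploit that horizontal/vertical structure is median-filter harmonic-percussive separation. This isn't Autochiptune's source, just a minimal sketch of the idea, plus the kind of square-wave oscillator the harmony gets resynthesized with:

```python
import numpy as np

def hpss_masks(S, kernel=9):
    """Median-filter HPSS sketch. S: magnitude spectrogram (freq x time).
    Filtering along time keeps horizontal ridges (harmony); filtering
    along frequency keeps vertical ridges (percussion)."""
    pad = kernel // 2
    # median along the time axis -> harmonic (horizontal lines) survive
    St = np.pad(S, ((0, 0), (pad, pad)), mode='edge')
    harm = np.median(
        np.stack([St[:, i:i + S.shape[1]] for i in range(kernel)]), axis=0)
    # median along the frequency axis -> percussive (vertical lines) survive
    Sf = np.pad(S, ((pad, pad), (0, 0)), mode='edge')
    perc = np.median(
        np.stack([Sf[i:i + S.shape[0], :] for i in range(kernel)]), axis=0)
    # soft masks that split the original spectrogram into the two parts
    total = harm + perc + 1e-8
    return harm / total, perc / total

def nes_square(freq, dur, sr=22050, duty=0.5):
    """NES-like square wave for resynthesizing a binned pitch."""
    t = np.arange(int(dur * sr)) / sr
    return np.where((t * freq) % 1.0 < duty, 1.0, -1.0)
```

In practice you'd multiply the masks against the complex STFT, bin the harmonic part into semitones, and drive one oscillator per pitch; `librosa.effects.hpss` implements the separation step properly.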
This bot is called Backwards Weave. Here it is remixing The Universal Declaration of Awesome. This is something I wrote that captures how I was feeling about life during a period when I was living in New Zealand, hanging out in the philosophy department at Canterbury. This mindset led to the origin of Dadabots soon after. The bot takes this song and jumbles up the pieces in the same way. Each section is backwards and woven. You can't understand the words anymore, but the arc of the song is still loosely there.
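I don't have the bot's actual algorithm in front of me, but the "backwards and woven" idea can be sketched like this: split the song into sections, reverse each one, and weave them by swapping neighbors, so sections drift but stay near their original position and the arc survives.

```python
import numpy as np

def backwards_weave(audio, n_sections=8, seed=0):
    """Illustrative sketch, not the bot's real code: reverse each
    section, then weave by chance-swapping adjacent sections."""
    rng = np.random.default_rng(seed)
    sections = [s[::-1] for s in np.array_split(audio, n_sections)]
    order = np.arange(n_sections)
    # only neighbors swap, so the song's overall arc loosely survives
    for i in range(0, n_sections - 1, 2):
        if rng.random() < 0.5:
            order[i], order[i + 1] = order[i + 1], order[i]
    return np.concatenate([sections[i] for i in order])
```

Every sample of the original is still there, just locally reversed and shuffled, which is why the words dissolve while the shape of the song remains.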
Bonafide Slideglide Ride was the most chaotic of the bots. It takes random sections and pitch-bombs them into oblivion. This particular track is special because its ancestry goes several remixes deep. Mashups are tracks that combine multiple tracks together in some novel way. Smashups take this to the extreme. This was smashed together into 1 minute, then this bot took the audio further and corrupted it into chaos, until it had no resemblance to the originals. That's one of the things I love about audio: you can start with familiar material and corrupt it into something unrecognizable.
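A crude way to "pitch-bomb" sections (a guess at the flavor of technique, not the bot's code) is to resample each section by a random ratio, so pitch and speed warp together, which only adds to the chaos:

```python
import numpy as np

def pitch_bomb(audio, n_sections=8, max_semitones=24, seed=0):
    """Illustrative sketch: resample each section by a random pitch
    ratio (up to +/- two octaves). Resampling changes duration too,
    so the timeline lurches around as well."""
    rng = np.random.default_rng(seed)
    out = []
    for section in np.array_split(audio, n_sections):
        semitones = rng.uniform(-max_semitones, max_semitones)
        rate = 2.0 ** (semitones / 12.0)   # pitch ratio from semitones
        idx = np.arange(0, len(section) - 1, rate)
        out.append(np.interp(idx, np.arange(len(section)), section))
    return np.concatenate(out)
```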
Chopshopshockshack uses chance operations and dissociated arrays to chop up and reorder the guts of songs. We were listening to a lot of glitchcore like Vaetxh and trying to figure out how the hell he made this music. Was it algorithms? Could we make those algorithms? Chopshopshockshack was especially built for his kind of music, and it really shines when it remixes it.
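A dissociated-array chop-and-reorder can be sketched like this (an illustration of the idea, not the bot's source): cut the song into grains, then draw grains at random with replacement, with a chance operation that occasionally reverses one.

```python
import numpy as np

def chop_shop(audio, grain=2048, seed=0):
    """Chance-operations sketch: chop into fixed-size grains, then
    rebuild by drawing grains at random (repeats and omissions both
    happen), reversing some by coin flip."""
    rng = np.random.default_rng(seed)
    n = len(audio) // grain
    grains = audio[:n * grain].reshape(n, grain)
    picks = rng.integers(0, n, n)          # draw with replacement
    out = []
    for p in picks:
        g = grains[p]
        if rng.random() < 0.25:            # chance operation: reverse
            g = g[::-1]
        out.append(g)
    return np.concatenate(out)
```

Because grains repeat and drop out unpredictably, the result stutters like glitchcore while staying entirely made of the source's own material.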
If I could make Pizzafire, I could do anything with neural nets.
In 2015 there was no open source toolkit for generating animated GIFs with neural nets on distributed computing, so I wanted to release the first. In my head I imagined an animated gif of fire in the style of pizza. And this was my first goal: to make Pizzafire.

Neural style took two inputs: a composition image and a style image. It used a gradient descent optimizer to create a third image that both (1) closely matched the composition image, and (2) matched the statistics of the style image, with some trickery that removes spatial information. I'd run it for 500 iterations. By today's standards this was hardcore. GANs generate in a single pass. Home computers couldn't run this; you needed to rent a remote server with a GPU in the cloud for $0.65/hour. A single image took several minutes to generate. To make animation in any reasonable time, you needed to spin up several cloud computers at once, each rendering frames. I had to write a script that spins up several servers in the cloud with GPUs, renders the frames, then shuts them down so it doesn't run up your bill. This is Pizzafire. It's the name of the script and the first gif made with it. It's my first project with neural networks.
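The "trickery that removes spatial information" is the Gram matrix: averaging feature correlations over all spatial positions keeps texture statistics but forgets where anything was. Here's a toy sketch of the objective and one gradient step, using identity "features" so the gradients stay analytic (the real method uses VGG activations at several layers):

```python
import numpy as np

def gram(features):
    """Gram matrix: channel-by-channel correlations averaged over
    spatial positions -- position is discarded, texture remains."""
    c, n = features.shape                  # channels x (height*width)
    return features @ features.T / n

def style_transfer_step(x, content, style_gram, lr=0.005,
                        alpha=1.0, beta=1.0):
    """One gradient-descent step on the image itself.
    Loss = alpha*||x - content||^2 + beta*||gram(x) - style_gram||^2.
    (Toy version: identity features instead of a conv net.)"""
    g = gram(x)
    grad_content = 2 * alpha * (x - content)
    # d/dx ||G - A||^2 with G = x x^T / n  is  (4/n)(G - A) x
    grad_style = 4 * beta * ((g - style_gram) @ x) / x.shape[1]
    return x - lr * (grad_content + grad_style)
```

Running that step in a loop is the whole optimizer: the image is the only thing being trained, which is why a single output took hundreds of iterations and minutes of GPU time per frame.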