Stupid Hackathon: Is it ok to dance?

For this weekend’s Stupid Hackathon, I made Ok2Dance.

Ok2Dance is kind of like Shazam, except instead of telling you what music is playing, it analyzes the sound and tells you whether it's ok for dancing.

Ok2Dance accesses your computer's microphone to record 20 seconds of audio. It saves the recording (temporarily) as a .wav file, then sends that file to the Echo Nest for an in-depth musical analysis. From there, Ok2Dance polls the Echo Nest continuously for the results. Once the analysis is ready, it converts the Echo Nest's floating-point danceability score into a simpler Yes or No answer, loads a corresponding animated GIF, and shows the result alongside that GIF.
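The poll-then-threshold step is the simplest part. Here's a rough sketch of that logic; the endpoint details, the two-second retry, and the 0.5 cutoff are illustrative, not necessarily what the app actually ships:

```javascript
// Poll the Echo Nest track profile until the analysis is ready, then
// collapse the 0-to-1 danceability float into a Yes/No answer.
// The endpoint details, 2-second retry, and 0.5 cutoff are illustrative.
function checkDanceability(trackId, apiKey, onAnswer) {
  var url = 'http://developer.echonest.com/api/v4/track/profile' +
            '?api_key=' + apiKey + '&id=' + trackId +
            '&bucket=audio_summary&format=json';

  function poll() {
    fetch(url)
      .then(function (res) { return res.json(); })
      .then(function (data) {
        var track = data.response.track;
        if (track.status === 'pending') {
          setTimeout(poll, 2000); // analysis not ready; ask again shortly
        } else {
          var danceability = track.audio_summary.danceability; // 0..1
          onAnswer(danceability >= 0.5 ? 'Yes' : 'No');
        }
      });
  }

  poll();
}
```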

At this point, the song might be over. But at least you'll know whether it would have been ok to dance to it.

Why not try Ok2Dance next time you're at a club or party and wondering if it's ok to start dancing? Simply log onto your desktop computer and point its web browser to jasonsigal.cc/ok2dance/. (NOTE: mobile devices do not support getUserMedia, so you must bring a desktop computer!) If you're looking for test audio, disco is usually danceable, but the sound of staring blankly into a computer screen usually is not.
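If you want to check support before hauling a desktop to the party, the feature test is short. In current browsers, getUserMedia lives under navigator.mediaDevices:

```javascript
// Feature-check before recording: getUserMedia is unavailable on many
// devices (hence the desktop requirement above).
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function (stream) {
      console.log('Microphone ready; ok to record 20 seconds.');
    })
    .catch(function (err) {
      console.log('Microphone blocked or missing: ' + err.name);
    });
} else {
  console.log('getUserMedia is not supported on this device.');
}
```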

You can record anything you want, as many times as you want. And don’t worry about copyright or privacy, because the files are not stored on my server. They’re deleted immediately after the danceability analysis is complete.

Source code on GitHub

Kandinskify: Kandinsky-inspired generative music visualizations

At the Music Visualization Hackathon, I collaborated with Michelle Chandra, Pam Liou and Ziv Schneider to create Kandinskify. You can check out the work in progress here.

Kandinskify prototype

For inspiration, we looked to the history of music visualization. Ziv brought up Wassily Kandinsky, the Russian painter and art theorist.

Kandinsky was fascinated by the relationship between music and color. He even developed a code for representing music through color (see Kandinsky's Music/Color Theory).

We set about adapting Kandinsky's code into a generative visualization. Kandinskify analyzes musical attributes of a song as it plays to create a digital painting in the style of Kandinsky.

Kandinsky's musical attributes, like "Melody seems the dawn of another world," are not easy to decipher with the human ear, let alone with a computer program. But we used the Echo Nest Track Analysis to figure out some useful attributes for a song, like 'acousticness' and 'energy,' and we use this information to determine a song's color palette. The Echo Nest also offers a timestamped array of every beat in a song, which we used to determine when shapes should be drawn.
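Concretely, that mapping is just interpolation between palette endpoints driven by the 0-to-1 attribute floats, plus a pointer into the beats array. A rough sketch (the endpoint colors here are placeholders, not our actual palette):

```javascript
// Linearly interpolate between two [r, g, b] colors.
function lerp3(a, b, t) {
  return [0, 1, 2].map(function (i) {
    return Math.round(a[i] + (b[i] - a[i]) * t);
  });
}

// Map Echo Nest audio_summary attributes (0-1 floats) to a palette.
// The endpoint colors are placeholders, not our final choices.
function paletteFor(audioSummary) {
  var energy = audioSummary.energy;             // hotter colors for high energy
  var acousticness = audioSummary.acousticness; // muted tones for acoustic tracks
  return {
    background: lerp3([250, 245, 230], [20, 20, 30], energy),
    shape:      lerp3([200, 60, 40], [120, 140, 160], acousticness)
  };
}

// The beats array is timestamped: [{ start, duration, confidence }, ...],
// so deciding when to draw the next shape is a comparison against playback time.
function beatIsDue(beats, i, currentTime) {
  return i < beats.length && currentTime >= beats[i].start;
}
```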

Kandinsky's instrumentation, like "Middle range church bell," can be hard to find in contemporary music. But with digital audio files, we can analyze the amplitude and frequency spectrum of a song as it plays, and use that information to isolate certain instruments. We started out working with Zo0o0o0p!!! by Kidkanevil & Oddisee, one of my favorites from the Free Music Archive (though it may not have been Kandinsky's choice). We were able to isolate the frequency ranges of certain instruments and map them to shapes. For example, the bell makes a spike at 7.5 kHz, which we mapped to the oval shape. In the long run, we'd like to let users upload any song.

We used p5.js to create visuals on an HTML5 canvas. I’m developing an audio library for p5 as part of Google Summer of Code, so this was a fun opportunity to map audio to visuals with p5Sound.
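For instance, the bell isolation described above boils down to a few lines with p5Sound's FFT. This is a minimal sketch, not our actual code; the filename is hypothetical and the energy threshold is something you'd tune by ear:

```javascript
// Watch a narrow band around 7.5 kHz for the bell and map its energy
// to the oval. Filename and the 200 threshold are placeholders.
var song, fft;

function preload() {
  song = loadSound('zo0o0o0p.mp3'); // hypothetical filename
}

function setup() {
  createCanvas(600, 600);
  fft = new p5.FFT();
  song.play();
}

function draw() {
  fft.analyze(); // refresh the spectrum for this frame
  var bell = fft.getEnergy(7000, 8000); // energy in that band, 0-255
  if (bell > 200) {
    noFill();
    ellipse(width / 2, height / 2, bell, bell * 0.6); // the oval shape
  }
}
```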

The visual composition spirals out from the center in a radial pattern, inspired by this clip of Kandinsky drawing.
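The placement itself is plain polar coordinates: bump the angle on each beat and let the radius creep outward. Something like this, where the step sizes are placeholder values we'd tune:

```javascript
// Place each new shape along an outward spiral, one step per beat.
// Angle and radius increments are placeholders.
var angle = 0;
var radius = 0;

function nextShapePosition(centerX, centerY) {
  var x = centerX + radius * Math.cos(angle);
  var y = centerY + radius * Math.sin(angle);
  angle += Math.PI / 6; // a twelfth of a turn per beat
  radius += 4;          // drift outward as the song progresses
  return { x: x, y: y };
}
```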


We're planning to allow users to upload any song (or sound). We're developing different color and shape palettes that will change based on the musical/sonic attributes. Kandinskify will analyze the music to create a unique generative visualization that is different for every song. We'll update the source code on GitHub and demo here.

ccRex // Music Hack Day

For Music Hack Day NYC, I joined forces with @KrishnaDrum to make ccRex. Upload a song, and ccRex fetches Creative Commons music to match. It was mostly written in PHP, using what little I've learned in CommLab Web. I should have used JavaScript to add feedback, and maybe even an animation of Rex the dog hopping around the screen while the playlist loads. Anyway, check it out here, be kind to my servers, and I'm looking forward to the next hack day.
