Where I’m At – App Idea

For Always On, Always Connected, I want to explore location-based audio and develop a mobile app around that idea.

First, I need to learn more about how people approach location + audio. What could make location-based audio interesting enough to create? To share privately, or publicly? Is there data about audio that might be interesting, or should I focus just on the audio itself?

I have a few ideas already. The first one is called Where I’m At. It’s basically a mashup of Shazam (or, more likely, the Echo Nest’s musical fingerprinting service) and Foursquare.

Shazam has a social component, but most people don’t use it. If you’re shazamming something, the assumption is that you didn’t know what song was playing and had to use Shazam to find out. That’s not a cool thing to share.

But the idea of listening to a specific song in a specific place at a specific time changes what it means to tag a song. For example, I could tag that I’m at Brad’s, and they’re playing “Freaks Come Out At Night” by Whodini, and then share that with friends. The geolocation isn’t interesting, the song isn’t that interesting, but maybe the combination is.

Tagging the song becomes an easy way to share the vibe of where I’m at. Where I’m At could be the name of the app.
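To make the idea concrete, here’s a rough sketch of what a single tag might look like as data: the fingerprinted song plus a coordinate, an optional venue label, and a timestamp. The type names and fields here are hypothetical, and I’m assuming the fingerprinting service hands back a title/artist match:

```swift
import Foundation
import CoreLocation

// Hypothetical result from a fingerprinting service (Echo Nest, Shazam-style).
struct SongMatch {
    let title: String   // e.g. "Freaks Come Out At Night"
    let artist: String  // e.g. "Whodini"
}

// A "Where I’m At" tag: the song, the place, and the moment, bundled together.
struct AudioTag {
    let song: SongMatch
    let coordinate: CLLocationCoordinate2D  // where I heard it
    let venueName: String?                  // optional label, e.g. "Brad’s"
    let timestamp: Date                     // when I heard it
    let isPublic: Bool                      // share with friends, or everyone
}

let tag = AudioTag(
    song: SongMatch(title: "Freaks Come Out At Night", artist: "Whodini"),
    coordinate: CLLocationCoordinate2D(latitude: 40.7291, longitude: -73.9965),
    venueName: "Brad’s",
    timestamp: Date(),
    isPublic: false
)
```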

What’s Already Out There?

Location-based media is already out there. It’s a major way people use Instagram: photo/video + geolocation. The average person is more likely to share their location on Instagram than on 4sq because the emphasis on media is what makes the location interesting and gives it deeper context.

There are already plenty of apps that let people share audio and optionally attach a location. For example, one could use Soundcloud + Twitter, or video-sharing apps where audio is part of the experience. I can also send private iMessages with audio, or leave voicemails. But none of these were designed specifically with location-based audio in mind. If they were, what would they look like? Or sound like?

I’ve had a few interesting conversations with friends/colleagues about location-based audio, and what might inspire us to share audio.

Jacob RW introduced me to a great app called ChitChat that works at the press of a button. It lets you send voice messages to groups (it emphasizes group messaging, though a group can be a single person). The interface is very straightforward: press a big button to record your message, and press a similar button to hear messages you’ve received. Messages are ephemeral, like a radio broadcast or a phone conversation. By default they play out of the speaker, but when you hold the phone up to your ear, the app switches into “ear mode” and plays through the earpiece.
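As a side note on how ear mode likely works: iOS exposes the proximity sensor through UIDevice, and an app can re-route audio between the loudspeaker and the earpiece when the sensor fires. Here’s a minimal sketch using the UIKit/AVFoundation APIs; the class name and structure are mine:

```swift
import UIKit
import AVFoundation

// Sketch of an "ear mode": route playback to the earpiece when the phone
// is held up to the ear, and back to the loudspeaker otherwise.
final class EarModeMonitor {
    init() {
        // Earpiece routing requires the play-and-record session category.
        try? AVAudioSession.sharedInstance()
            .setCategory(.playAndRecord, mode: .default, options: [])

        // Start reporting the proximity sensor (this also blanks the screen
        // while something is near it, just like during a phone call).
        UIDevice.current.isProximityMonitoringEnabled = true
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(proximityChanged),
            name: UIDevice.proximityStateDidChangeNotification,
            object: nil
        )
    }

    @objc private func proximityChanged() {
        let session = AVAudioSession.sharedInstance()
        if UIDevice.current.proximityState {
            // Phone is at the ear: use the quiet earpiece receiver.
            try? session.overrideOutputAudioPort(.none)
        } else {
            // Phone is away from the ear: use the loudspeaker.
            try? session.overrideOutputAudioPort(.speaker)
        }
    }
}
```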

I think this is a very well-designed app, developed by IDEO as a sort of experiment. I wonder which users they’re designing for, and what their process has been like. Using ChitChat requires two people to agree to communicate in a slightly different way: it affords a fragmented conversation style where you don’t hear the other person’s reaction. It’s basically voicemail reimagined for the iPhone, for people who want to go “straight to voicemail” rather than letting the phone ring.

I suspect most people are too self-conscious to send voice messages in public because our culture has become accustomed to text messages. Leaving a voicemail also feels more vulnerable than a text: the fact that the phone rang and the other person didn’t pick up means they might not have wanted to talk to you, so you’re putting yourself out there. During the first class meeting, it seemed like barely anybody uses their phone to actually have voice conversations. But maybe an app like this can change people’s perceptions of communicating with our voices, because sound can be so much more expressive than text…even with emojis.

I really love ChitChat’s “ear mode.” What if every time you hold your phone to your ear, you hear something interesting? It could be a stream of audio created, or curated, by your friends or people you follow. I definitely want to play with this sensor. What if you could, at any given moment, hold the phone up to your ear and hear songs that people have tagged nearby? When you listen with headphones, the songs might play with binaural spatialization, so that if you hear something you like, you can walk toward that location.
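To sketch the binaural piece: AVFoundation’s AVAudioEnvironmentNode can render mono sources as 3D point sources over headphones, so each nearby tag could be placed at its offset from the listener. This assumes the geo math (converting lat/lon deltas into meters east/north) happens elsewhere; the helper function and axis convention are mine:

```swift
import AVFoundation

// Sketch: place each nearby tagged song as a 3D point source around the
// listener, so headphone playback hints at which direction to walk.
let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
engine.attach(environment)
engine.connect(environment, to: engine.mainMixerNode, format: nil)
environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
try engine.start()

// Hypothetical helper: offsets are meters from the listener, derived from
// the tag’s coordinates elsewhere in the app.
func play(tag file: AVAudioFile, metersEast: Float, metersNorth: Float) {
    let player = AVAudioPlayerNode()
    engine.attach(player)
    // Mono sources get spatialized; stereo would pass through unprocessed.
    engine.connect(player, to: environment, format: file.processingFormat)
    player.renderingAlgorithm = .HRTF  // binaural rendering for headphones
    // Axis convention used here: +x is east, -z is north (straight ahead).
    player.position = AVAudio3DPoint(x: metersEast, y: 0, z: -metersNorth)
    player.scheduleFile(file, at: nil)
    player.play()
}
```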

Abe tipped me off to Shoudio, which is building a global catalog of audio, but the app doesn’t work on my phone. Apparently there was an NYTimes article about it once. Abe says it doesn’t offer a way to sort by time, and time is a very important filter for a map of location-based audio. What is audio without time, anyway?

I’m also inspired by Abe’s work creating heatmaps of sonic energy at specific locations. What if I could see that the bar I’m looking up on Yelp is really quiet right now? Or look at the trends over time, and filter them by attributes like loudness, danceability, and other features? At what point does the data about sound become more interesting than the sound itself, and what might inspire people to share this data at all, let alone publicly?
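On the data side, the simplest measure of “sonic energy” is probably an RMS level per time window, which an app could timestamp, geotag, and aggregate into heatmaps like Abe’s. A minimal sketch using a microphone tap follows; weighting curves, permission handling, and the upload step are all omitted:

```swift
import AVFoundation

// Sketch: sample ambient loudness as an RMS level in dBFS (decibels relative
// to full scale), the raw ingredient for a "how loud is this bar right now?"
// heatmap. A real app also needs microphone permission.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let n = Int(buffer.frameLength)
    var sumOfSquares: Float = 0
    for i in 0..<n { sumOfSquares += samples[i] * samples[i] }
    let rms = sqrt(sumOfSquares / Float(max(n, 1)))
    let dBFS = 20 * log10(max(rms, 1e-7))  // floor avoids log(0)
    print(String(format: "level: %.1f dBFS", dBFS))
    // Next step would be to timestamp + geotag this value and upload it.
}

try engine.start()
```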

For now, here is a rough proto.io prototype.
