The man that hath no music in himself,
Nor is not moved with concord of sweet sounds
Is fit for treasons, stratagems and spoils;
The motions of his spirit are dull as night,
And his affections dark as Erebus:
Let no such man be trusted.
William Shakespeare, The Merchant of Venice
It’s got to be one of my favourite quotes; it keeps coming back to me, even though I have no recollection of memorising it while at school. Powerful words. Let no such man be trusted indeed.
My epiphany about Four Pillars is very deeply rooted in listening to, seeking to understand and trying to learn about music. There’s something about music, sufficiently distinct and separate from commerce as it were, yet deeply intertwined with our daily lives. A part of me believes that there isn’t a blogger alive who does not like music; Erebus and blogging don’t mix.
We can learn a lot about syndication and streaming and “publishing” from looking at how music is played or broadcast; the way we connect to music has lessons for us in search; the process we undergo in acquiring music has much to teach us about fulfilment; and much of sharing and co-creation has roots in the world of music.
It is in music that I come across the issues and problems to do with IPR and DRM; in music that I see the attempts to control access and distribution at every point in the flow, be it chipset or device or connect or format or source. It is in music that I can see many dimensions: copyright of the written and symbolic forms, and of the lyrics; performance and broadcast rights; ancillary markets to do with videos and DVDs and T-shirts and coasters and ties and what-have-you; image rights and memorabilia rights; the implications of sampling and mash-ups, of creating new from many old; shattering of old distribution models and challenging of new ones; battles royale on disk formats and DRM techniques; an incredible mishmash when it comes down to the device of choice. Is that a phone, a camera, a PDA, a music player, a computer in your pocket? Or are you just pleased to see me?
It’s Yogi Berra time. When you come to a fork in the road, take it.
An aside, before I get on to my theme of music and search. Have you ever wondered about how the Seven Ages of Man manifests itself in the music space, or is it just me? When I look at where I find the music I particularly like, it seems to me that there are clear changes as I grow older.
- Age 1: Parents choose. Any colour you like as long as it’s black.
- Age 2: Radio or equivalent. Experiment and learn.
- Age 3: Latest Releases / Singles. Savour your independence and spend your pocket money.
- Age 4: Mainstream Alphabetical. You’re now one of the crowd.
- Age 5: Specialist sections like Rock or Metal or Hip-Hop or Classical or Reggae or Folk. Now you need to work at looking cool.
- Age 6: Easy Listening. Count your teeth and check your hair loss.
- Age 7: Between the candy and the chewing gum at gas stations and convenience stores. Book that funeral parlour.
On to the meat. I was reading last week’s New Scientist (I was travelling when it came out), and found this article where Kurt Kleiner argues that it’s time for a “whole new kind of search engine”. Unfortunately the link only gets you to a stub; the full article has been DRMed out of the brownies. Sad.
His basic thesis is this: There’s an incredible amount of music out there in the digital universe, over 25 million tracks. [Just think: If you laid them all end to end you may actually surpass the queue consisting of unemployed (?) Sarbanes-Oxley consultants. Unless they return to full employment as Digital Rights consultants….] Traditional indexing methods aren’t good enough. Even tagging is not good enough. So it’s time to create a more elaborate way of describing a tune, taking into account “not just key or tempo, but dozens of different characteristics, including the timbre of the sounds, chord progressions, the individual instruments and even details about each singer’s voice”.
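To make that concrete, here is a minimal sketch of the idea: describe each track as a vector of acoustic characteristics and rank the catalogue by similarity to a query track. The feature names, values and track names below are invented for illustration; they are not MusicIP’s or Pandora’s actual attributes.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical feature vectors: [tempo (normalised), brightness of timbre,
# vocal roughness, acoustic-vs-electric balance, chord-progression complexity]
catalogue = {
    "Track A": [0.8, 0.6, 0.2, 0.1, 0.5],
    "Track B": [0.3, 0.2, 0.7, 0.9, 0.4],
    "Track C": [0.7, 0.5, 0.3, 0.2, 0.6],
}

def sounds_like(query, catalogue, top_n=2):
    """Return the catalogue tracks closest to the query vector."""
    scores = [(name, cosine_similarity(query, vec)) for name, vec in catalogue.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]

print(sounds_like([0.75, 0.55, 0.25, 0.15, 0.55], catalogue))
```

The interesting design question is not the similarity measure but which dozens of characteristics you extract in the first place, which is exactly the problem the article describes.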
He mentions what MusicIP seeks to do in this space, including the MusicIP Mixer as a product, and the company’s plans to develop “software that uses information on your lifestyle, diet and tastes to come up with meal suggestions, recipes and even a shopping list of ingredients”. La la la I’m not listening.
Mr Kleiner also covers what Pandora does, reporting on what Tim Westergren, the founder, has to say: “He calls the system the Music Genome Project: just as individuals can be identified by their different combinations of genes, so Pandora aims to distinguish any piece of music according to how it scores on this set of musical ‘genes’.”
Try and read the article if you can; it’s worth it. Just to put some new stuff in your head, or to shake some old stuff up. But in the meantime…
It made me think. There’s a lot of activity in the music space already…
First, old-style deterministic search. Exact matches only and all that jazz.
In this space, Shazam already works wonders for me, I find it liberating to point a phone at the source of music while in a car, and to have the details of the song that’s playing texted to me. Shazam’s been around in the UK for a number of years now, and it’s pretty good. [Killed off pub music quizzes though…] So someone somewhere has done some work on the technique of converting music into unique digital patterns, and then yanking exact matches out of a large database. And yes, the Pandoras of this world can improve on the model.
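The index-and-match shape of that problem is easy to sketch. Shazam’s real fingerprinting technique is far more robust (and not reproduced here); this toy, with invented sample data, only shows the idea of reducing audio to compact fingerprints and pulling exact matches out of a large index.

```python
from collections import defaultdict
import hashlib

def fingerprints(samples, window=4):
    """Yield a hash for each short window of a (toy) sample stream."""
    for i in range(0, len(samples) - window + 1, window):
        chunk = ",".join(str(s) for s in samples[i:i + window])
        yield hashlib.md5(chunk.encode()).hexdigest()

# Build the index once, over the whole catalogue: fingerprint -> track names
catalogue = {
    "Song X": [1, 5, 3, 9, 2, 8, 4, 7],
    "Song Y": [2, 2, 6, 1, 5, 3, 9, 2],
}
index = defaultdict(set)
for name, samples in catalogue.items():
    for fp in fingerprints(samples):
        index[fp].add(name)

def identify(clip):
    """Return the catalogue track whose fingerprints best match the clip."""
    votes = defaultdict(int)
    for fp in fingerprints(clip):
        for name in index.get(fp, ()):
            votes[name] += 1
    return max(votes, key=votes.get) if votes else None

print(identify([1, 5, 3, 9, 2, 8, 4, 7]))  # -> "Song X"
```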
Next, more modern probabilistic search. No longer exact matches but analogies and parallels, relevance and ranking.
I remember hearing about MongoMusic some six or seven years ago; they were the first firm I knew of that had an answer to the “Sounds Like” problem. But before I could get to play with them, they were taken over by Microsoft and disappeared off the face of the earth. I have no idea whether they will resurface in Son-Of-Windows-Media-Centre-Meets-Vista. But they could.
Then, collaborative filtering approaches to music.
While Pandora does its bit (and I am sure there are many others), last.fm seems to have this market well in sight. People who liked this also liked. Share playlists with people like you. And all that jazz. Or classical, if you prefer. Which reminds me, Firefly were the first guys I saw with a real understanding of collaborative filtering. And they too got Redmonded. And disappeared. Another to resurface post Vista? They could. The Firefly founders seem to have re-established themselves in Skype and in Lovefilm, amongst others, so they’re obviously still in the same space.
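“People who liked this also liked” can be sketched in a few lines: count how often other tracks co-occur in the histories of listeners who played the track in question, and recommend the most frequent companions. The listening data here is invented, and real services work from play counts at a vastly different scale with smarter weighting; this is only the bare shape of the idea.

```python
from collections import Counter

# Hypothetical listening histories
listeners = {
    "alice": {"Track A", "Track B", "Track C"},
    "bob":   {"Track A", "Track C"},
    "carol": {"Track B", "Track D"},
}

def also_liked(track, listeners, top_n=3):
    """Rank other tracks by how often they appear alongside the given track."""
    counts = Counter()
    for history in listeners.values():
        if track in history:
            counts.update(history - {track})
    return counts.most_common(top_n)

print(also_liked("Track A", listeners))  # -> [('Track C', 2), ('Track B', 1)]
```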
We can then move to better visualisation techniques involving music.
The best I’ve seen so far is what was called MusicPlasma, now LivePlasma. Again, there are many others, all I am doing is declaring the one I am most familiar with. Here we have some sort of almost-fractal images to depict artists and groups and genres, a blueprint for finding “neighbours” of musicians you like.
And just in case you need a WayBack Machine for the rich journalism that underpins all this, you even have labours of love like RocksBackPages; again, I’m sure there are others.
The examples I’ve chosen are illustrative and no more than that. The list is neither comprehensive nor necessarily accurate. And I’ve avoided the simple established ways of doing things like iTunes. Which is great. For now.
What I was seeking to do was prove a point.
Modern search will consist of a number of things: exact matches when called for; Sounds-Like when appropriate; collaboratively-filtered when appropriate; playlist-traded when possible, on community or friend recommendation; working off graphical visualisations when appropriate, a GoogleEarth-meets-LivePlasma approach; driven by rich history in text where relevant, a Rolling Stone meets RocksBackPages approach. All surrounded by these places we call the internet, with a host of little markets like the ones described above, and a host of little communities ranging from eBay thru Amazon thru technorati to YouTube and Bebo.
That’s what all search will become, when Generation M rule. And our role, from an enterprise-meets-technology perspective, is to pave the way for them. By doing the right thing with operating systems and platforms and infrastructure (keeping them vendor-agnostic and affordably priced), with devices and connect mechanisms (keeping them diverse and versatile and completely at consumer choice level), with digital information (avoiding all the dinosaur pitfalls in the IPR-meets-DRM space), with personal and collective tagging.
That’s why I try and learn from music when I want to build out the Four Pillars model. Because people are working on it now. Experimenting now. And it’s exciting.
BTW something else occurs to me. I’m glad someone didn’t patent each and every note in music, because I’m sure it’s not for want of trying. People will try and patent the strangest things. But if we had some sort of genome sequence for each and every song made, wouldn’t it be fun to watch someone trying to stop sampling and mash-ups? Sorry, it’s now a new music DNA segment; it’s not a copy. Live by the IPR-DRM sword and you could die by it. And the sword’s pretty blunt. Which is why we need a whole new way of compensating artists, rather than paving the cowpaths of the old regime, as Michael Hammer would say.