Thinking about waste

I am beholden to TS Holen for the wonderful photograph above, which he calls Ready-made Waste

To repeat what I said yesterday, as most of you probably know, I was born and brought up in Calcutta. A busy, vibrant city inhabited by millions of people. Who create a lot of waste.

While I lived there, I was fascinated by how this waste fed an entire human and economic ecosystem, the Indian and modern equivalent of the waste-pickers, scavengers, and rag-and-bone men. This ecosystem is not unique to Calcutta or even to my lifetime; Steven Johnson does a wonderful job of describing the way all this happened in Dickensian London in his book, The Ghost Map; if you haven’t read it, get yourself a copy today: it’s well worth it. In fact all of Steven’s books are worth a read. Really.

My thanks to Rajib Singha for his composition above, Romancing the Raj: dung cakes drying on a wall in Bagbazar while a tram approaches

When I looked at waste in this context, one of the things that excited and astounded me was the vibrancy and sheer sustainability of the ecosystem around waste, as evinced by the way cow dung is mixed with straw, dried on walls and then used as cheap fuel in many parts of the world. Growing up amidst such practices taught me something: I learnt to respect waste and to recognise that people had livelihoods deeply intertwined with it. Last year, I had the opportunity to walk around parts of Calcutta late one night, and experienced both joy and shock as I saw the ecosystem in action.

Over the years I’ve carried this learning into somewhat different contexts, particularly when it comes to project management and delivery. You see, I felt it was reasonable to consider all inefficiency as waste. As a consequence, when I observed an inefficient practice at work, I tried to identify the ecosystem participants for that waste, the people whose livelihoods depend on that waste. Because they were the ones most likely to push back against any change in work practices and processes. All projects are fundamentally about change, and unless such immune-system agents are identified and taken into account, project failure is likely.

This is not some deep personal insight. Software developers, especially those who use design patterns, are usually extremely competent at analysing the as-is context from the viewpoint of problems and workarounds. What problems need to be solved. What workarounds exist today. Which inefficiencies have become enshrined in work practices. The developer then sets out to identify the root causes for the workarounds, to design more appropriate responses and to plan for sensible migration paths from the workarounds.

Sometimes the workarounds are so deeply embedded that resistance is extremely high and, as a result, the temptation to fossilise the workaround into the system is immense. Which is why software developers are heard to say things like “there’s nothing as permanent as a temporary fix”.

Which brings me to the crux of this post. Once you accept that inefficiency can be considered equivalent to waste, you can walk untrodden paths. Like the waste built into ways of marketing, selling and distributing digital content, ways that carry the habits of the analogue world, ways that exist primarily to feed the mouths of the ecosystem around that waste.

Music. Advertising. Newspapers. All marketed, sold, distributed with analogue overlays on digital processes. The kind of thinking that encourages people to design region coding for DVDs. [What customer value does that generate?]

Music. Advertising. Newspapers. Industries with waste built into their historical processes. Industries with ecosystems of people built around that waste, people with mouths to feed and bills to pay.

And now we have the cloud. Which is fundamentally about a new way of doing business, seeking to eradicate the waste that permeates most enterprise data centres. Overprovisioning is not a bad thing per se, but there’s overprovisioning and then there’s what’s been happening for a few decades, whole orders of magnitude off from sensible overprovisioning.

The cloud is about eradicating waste.

Waste that feeds a massive ecosystem.

A massive ecosystem that will rise up and seek to prevent the eradication of that waste.

We’ve already seen this happen in the music business; we’ve already seen this happen in advertising; we’re seeing this happen in newspapers. And now we will see this happen in cloud.

People have built immense business models around erstwhile waste, the organisations have themselves grown immense as a result, and now they wield immense political and financial power. So they know how to arbitrage the situation and ensure that such inefficiencies are protected by law, by regulation. Which is what has been happening in copyright and intellectual property. Witness the abominations of the Digital Economy Act, of ACTA, of Hadopi.

Unlike the waste pickers and scavengers of prior centuries, the 20th and 21st century waste pickers haven’t evolved, haven’t adapted, haven’t faded gracefully away. Because they’re powerful enough to freeze progress, to insist on keeping their particular wastes in place.

But there’s one problem.

A big problem.

We can’t afford the waste any more. No longer sustainable.

Which is where I think Vendor Relationship Management (VRM) comes in. VRM represents a way through this impasse by placing the power where it should be: with the customer. It is the customer who has the highest motivation to eradicate waste in a system; yes, tools are necessary to help identify and deal with that waste.

The r-button or the relationship button, a key concept in VRM

One way of looking at VRM tools is that they will reduce human transactional latency by concentrating on the customer and the relationship first and on the transaction only as a consequence of that.

Doc Searls, the driving force behind VRM, has been a personal friend and mentor for many years now. This post was catalysed as a result of a recent conversation with him. The way advertising works now, the way we buy and sell, the way CRM systems operate, it’s all one-way. There’s a lot of inbuilt waste, waste that can be reduced, even annihilated, by giving customers the right voice, empowerment and tools. Which is really what VRM is about.
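
If you’ll forgive a programmer’s illustration: here is a minimal sketch of that inversion, in Python. Everything in it is hypothetical, my own naming rather than anything from the VRM project; the point is simply that the customer holds the relationship and broadcasts the intent, and the transaction falls out as a consequence.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A customer-issued statement of need: a personal RFP."""
    want: str            # e.g. "season ticket, home games only"
    max_price: float     # the customer's terms, not the vendor's

@dataclass
class Vendor:
    name: str
    catalogue: dict      # item -> price

    def respond(self, intent: Intent):
        """A vendor can only answer the intent; it cannot initiate."""
        return self.catalogue.get(intent.want)

@dataclass
class Customer:
    name: str
    history: list = field(default_factory=list)  # the relationship persists

    def broadcast(self, intent: Intent, vendors: list):
        """Relationship and intent first; the transaction is a consequence."""
        offers = [(v.name, v.respond(intent)) for v in vendors]
        matches = [(n, p) for n, p in offers
                   if p is not None and p <= intent.max_price]
        self.history.append((intent, matches))
        return matches
```

Compare that with CRM, where the vendor holds the record and initiates the contact; the waste, all the speculative messaging aimed at people who never asked for it, simply has nowhere to live in the customer-driven version.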

There’s a workshop to do with all this coming up next week, to be held at the Harvard Law School. People can contact Doc at dsearls AT cyber.law.harvard.edu, or on Twitter through @dsearls.

Make any sense? Let me know what you think.

Musing about learning by doing

My thanks to Dominik Hofer for the wonderful photograph shown above

Did you ever get the chance to read Blink? In that book, Malcolm Gladwell said something like the following:

We learn by example and by direct experience because there are real limits to the adequacy of verbal instruction.

Now this is something I’ve believed in ever since I was old enough to believe in anything. I count myself very lucky to have grown up in a time and space where curiosity was considered normal, and where you were expected to be passionate in your pursuits. Some months ago, some Swedish filmmakers asked me whether I’d share some brief views on a single big idea for 2020, as part of a larger collection of ideas. If you’re interested, you can see the 3-minute video here.

During the interview, I couldn’t help but focus on the “Maker” Generation coming into the workplace now, and how their experiences will affect education in years to come. Regular readers of this blog will know how taken I am with the zeitgeist embodied in, for example, the more recent works of Cory Doctorow (For The Win, Makers, Little Brother); the whole ethos behind Tim O’Reilly and Dale Dougherty’s vision encapsulated in Make Magazine; the joys and challenges of digital creativity as articulated by Larry Lessig in Remix.

When I saw that the interview had been released, I tweeted it; and a long-time blog reader and Twitter friend, Greg Lloyd (@roundtrip), reminded me about something he’d written, riffing off something I’d written around three years ago as part of my series on Facebook and the Enterprise.

And that got me thinking: the Maker Generation could be in for a fantastic time when it comes to learning by doing, and when it comes to being able to augment that experiential learning with observation of example.

My thanks to Fabrizio Cuscini for the wonderful photograph shown above

Why do I think that? Serendipity. A number of things are coming together:

  • Experience-capture tools are getting better, cheaper and more ubiquitous: Nowadays, with the cost of smartphones continuing to decline, with mobile connectivity apparently getting better (notwithstanding Western urban experiences of a post-iPhone-3GS world), and with the cost of storage continuing to plummet, the Maker Generation is able to collect and collate experiences in ways that prior generations could not.
  • Communal tools for sharing are getting better: In parallel with the evolution of smarter mobile devices, there has been a rash of places where sharing can take place. The Facebooks and Twitters and YouTubes and Flickrs and SlideShares and TiddlyWikis of this world make it possible to persist experiences, share them, augment and enrich them. Publication of material tends to be community-oriented nowadays; even courseware is going more and more open source.
  • The Maker Generation is more inclined to share: Whatever we may think about the implications for prudence and privacy, this generation is prepared to share experiences in ways no other generation was prepared to. The lifestreaming phenomenon is something that continues to gather pace; when you look at the digital social objects people upload to Facebook, and the relentless growth of that behemoth, you begin to see the sheer relational power of shared experiences. In this context, we need to remember that India and China are both cultures with a high focus on education, and on sharing as part of education.
  • The need for experience-based learning in the marketplace has never been greater: The Big Shift spoken of by authors John Hagel, John Seely Brown and Lang Davison (and expanded on delightfully in their book The Power of Pull) at least in part focuses on the transformation from a “stocks” based worldview to a “flows” based one. The “learning organisation” that Peter Senge spoke of has never been more needed.
  • There’s an increasing focus on education worldwide, with more appetite for radical approaches: The World Economic Forum’s Global Education Initiative is an example of such activity, highlighting the gravity of the situation and seeking to mobilise real energy into solving it; it made me ashamed to realise that I am part of a world where 72 million children of schoolgoing age don’t go to school, whatever the reason. Somewhat smaller initiatives such as the School of Everything (which I chair) also seek to change some of this landscape.
  • Trust in historical command-and-control “broadcast mode” institutions has never been lower: The willingness to accept learning-by-being-told is at a remarkable low, coming as it does at a time when most traditional authority figures (parent, priest, teacher, policeman, banker, MP, judge) are better off not running popularity contests.

A change is gonna come.

On the internet, sometimes people *do* know you’re a dog

Yesterday I wrote about the democratisation of production, distribution and consumption of digital information (I still find it extremely hard to use the word “content”; it makes me anything but content). The conclusion I was trying to get to was this: at the rate information was being produced, it would not be possible to curate it without asking for the help of the people producing the stuff in the first place.

In the post, I suggested that there were at least six different preferred outcomes of digital curation: authenticity, accessibility, veracity, relevance, consume-ability and produce-ability. In posts to follow I will try and unpack each of these preferred outcomes, extending what I wrote yesterday. Your comments will help me do that. I appreciate the time you’ve already spent in reading and commenting thus far.

Today, I want to look at just one aspect of the digital information production/distribution/consumption paradigm, that of “bundling”.

I’m sure there are many sophisticated marketing and management and strategy terms and explanations for the technique; but since I haven’t been to those classes, I will try and explain the technique the way I see it. Bundling is different from simple discounting, to the extent that there’s more than one type of thing being sold. And it branches into two types.

Type One bundling is where the things in the bundle work together, are meant to work together, yet can be bought in isolation as well. Integrated music systems made from components are a classic bundle play, where you can choose to buy the components in isolation or together; here, bundling is the process where you are incentivised to buy the components together. To me this is “good” bundling, since it appears to bring the technique of volume discounting to volumes constructed of heterogeneous-yet-related things. Everybody wins.

Type Two bundling is insidious: it is one where a bunch of disparate things are grouped together, a “job lot” as it were; in order to buy part of the lot you need to buy the whole lot. Crudely speaking, this form of bundling makes you buy things you don’t want to buy, as part of the process of your buying things you do want to buy.
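
A few lines of Python make the contrast plain; the prices are invented purely for illustration.

```python
# Type One: components work alone or together; taking the whole
# system earns a (hypothetical) discount. Nobody is forced into anything.
components = {"amplifier": 300.0, "speakers": 200.0, "turntable": 150.0}

def type_one_price(wanted):
    total = sum(components[item] for item in wanted)
    return total * 0.85 if set(wanted) == set(components) else total

# Type Two: a job lot. There is no per-item price; wanting any part
# of the lot means paying for the whole lot.
album_tracks = ["the hit", "filler one", "filler two"]
album_price = 12.0

def type_two_price(wanted_tracks):
    return album_price if wanted_tracks else 0.0

print(type_one_price(["amplifier"]))   # 300.0: buy only what you want
print(type_two_price(["the hit"]))     # 12.0: the filler comes along for the ride
```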

Such bundling can have unintended consequences. Many years ago, I worked for the City branch of a US computer firm; our offices were in Queen Victoria St, on what was termed a “warehouse” lease. [In a warehouse lease, the offices are expected to be largely empty between the hours of 7pm and 7am]. I was working late one night, and there was a call from the City Police. One of my colleagues took the call. It transpired that a large consignment of equipment manufactured by the firm had been found in the Thames while it was being dredged for some mysterious reason. Mysterious indeed. The story could not be hidden any longer. Salesmen were incentivised to sell bundles of equipment that included desktop calculators; we had tons of them. They could not join the “100% Club” and fly to exotic locations unless they met the bundle target. So they sold the calculators that no one wanted. When their customers refused to buy any more, they started giving them away. When their customers refused to take any more, even for free, the salesmen did the only thing possible. Quietly and almost ritually, they would mosey down to the Samuel Pepys on the river for an evening drink or five, and then, just before going home, consign their week’s allocation of calculators to the river. Quietly. And create fictitious customers for those calculators, paying for the equipment out of their own pockets.

So the analog world can have, and has had, problems with Type Two bundling. Yet this type of bundling is routine in many publishing circles: the newspaper and magazine industries tend to practise it, the cable television industry uses it, and the music industry thrives on it. Want to buy a track you like? Buy the album. Somebody else decided which tracks to spin off as singles, which tracks to impose on the public as B-sides, and which tracks to use to sell the album by refusing to unbundle them from it.

I’m one of these people who believes that progress is possible. So when I see retrogressive steps being taken between the analog and the digital world, I am less than happy. I have yet to meet the customer for whom region coding on a DVD is a valuable thing to have. In the same way, take football. The round-sphere variety. If I want to support a team in analog terms, I can buy a season ticket. Go see all the home games. Go see a few away games. And maybe a knockout cup run if the team does well. Yes, that’s what I can do in analog terms. Now try buying the same thing in a digital world. Doesn’t exist. Why? Because someone somewhere, not a customer, decided to bundle things differently.

All this is changing, as has been evinced in the world of music. People are pushing back against Type Two bundling, with predictable results. So album sales go down and singles sales go up. [Obviously there are exceptions, albums that have a consistent selection of good songs, where it could be argued that Type Two bundling is not taking place. Many of the albums I grew up with in the late 1960s and early 1970s weren’t really “job lot” collections of disparate things; instead, they were holistic collections of things that went together, concept albums, rock operas, concert sessions, and the like].

If you want to read more about the effects of unbundling on the music industry, you should check out Anita Elberse’s work at Harvard Business School. She published an excellent analysis of the current state of play in a November 2009 paper entitled Bye Bye Bundles: The Unbundling of Music in Digital Channels, a copy of which can be downloaded from here. My thoughts about bundling as an aspect of digital curation became much clearer after reading her article.

I tend to think that this “unbundling” is a critical backdrop to the issue of digital curation. People want to access specific digital things: the song, the clip from the film, the article from the magazine, the section from the newspaper. They are no longer interested in the analog wrap of irrelevance they had to put up with before. And this could have some interesting side effects: it would appear that you can no longer subsidise the weaker parts of your music/news/journalism output by joining them up with the stronger parts. On the internet, sometimes people do know you’re a dog.

Which in turn affects album sales/magazine sales and so on. And their associated revenues. People want to buy the small things, loosely joined.

Curators add to relevance by stripping away the irrelevant and the unneeded and the shoddy.

In order to improve consume-ability and relevance, curators need the tools to do this. There are two ways these tools will come about, the “nice” way and the “nasty” way. In the nice way, the producers and distributors make it easy for people to point to, package and pass on the relevant pieces. In the nasty way, people build those tools anyway, without the producers’ blessing. Capisce?

Type Two bundling is nothing more than the pig of artificial scarcity wearing the lipstick of producer-driven choice. And you know my views on that: every artificial scarcity will be met with an equal and opposite artificial abundance.

Thinking about democratised curation

I was invited to participate in a panel at the Google Zeitgeist event in the UK last month; it was a real privilege, it gave me the chance to listen to many good speakers, watch some fascinating demos and meet a whole bunch of people who challenged my thinking. Thank you Google, particularly Nikesh Arora, as well as the team led by Dan Cobley.

As with any conference where good things are said, I walked away with a litany of soundbites, some of which I tweeted live. But there was one that I did not tweet, one that I’ve had reason to continue to ponder, one that forms the kernel of this post. Eric Schmidt, Google CEO, had this to say (to be found at 19:48 in this video):

“…. the statistic that we have been using is between the dawn of civilisation and 2003, five exabytes of information were created. In the last two days, five exabytes of information have been created, and that rate is accelerating. And virtually all of that is what we call user-generated what-have-you. So this is a very, very big new phenomenon.”

It’s important to understand scale. As a child I had some real difficulty visualising the size of an atom. Until I read a book that said something like “if you took the carbon atoms contained in just one full stop on this printed page and laid them out end to end in a straight line, that line would extend from the earth to the sun and beyond”. So, while I knew that the amount of information being produced was growing, and at an accelerating rate, I didn’t really have an appreciation of the scale. Now I do, and I’m grateful to Eric Schmidt for that.
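
Taking the quote at face value, the arithmetic alone conveys the scale:

```python
EXABYTE = 10 ** 18                 # bytes

total_to_2003 = 5 * EXABYTE        # dawn of civilisation to 2003, per the quote
seconds_in_two_days = 2 * 24 * 60 * 60

rate = total_to_2003 / seconds_in_two_days
print(f"about {rate / 10 ** 12:.0f} TB of new information per second")
# about 29 TB of new information per second
```

Twenty-nine terabytes a second: the whole recorded output of civilisation to 2003, every two days.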

What it made me do was think. Think about why there’d been a quantum shift in scale. And the answers that came to me were predictable and simple: the tools for creating information had really become democratised, and as a result the number of people empowered to “create information” had grown by multiple orders of magnitude. With no sign of Moore’s Law coming to a screeching halt for at least another 20 years, it is reasonable to suppose that the phenomenon will continue.

And continue it will. Because the changes don’t stop there. It’s not just the tools for creating information that have been democratised, the tools for distributing it have been democratised as well. As Kevin Kelly kept reminding us, the internet is a very efficient copy machine.

If that wasn’t enough, the tools for consuming information have also been democratised, initially by the PC, then by the mobile phone, then by broadband and wireless broadband, and now by the convergence of all of these, the smartphone and tablet.

Production, distribution and consumption of all forms of digital information (text, music, image, video) have all been democratised. So why should the curation of these be any different?

Digital curation seems to be a richer form of curation than its analog equivalent. Here’s what I think it consists of:

  • Authenticity
  • Veracity
  • Access
  • Relevance
  • Consume-ability
  • Produce-ability

Let me try and explain a little further.

  • Authenticity: Confirming the provenance of the item, that it was created by the person or persons claimed. That the person credited wrote the book or article. That the singer or band sang the song. That the actor or director made the movie. And so on and so forth. Traditional media sources were quite used to doing this, and should be able to continue to do this.
  • Veracity: Confirming the “truth” of the item, in the sense of the “facts” represented. That the news item has been verified. That the photograph hasn’t been doctored. That the voice hasn’t been dubbed. You know what I mean. Again, something that traditional media are quite used to doing, something they should continue to do.
  • Access: Andrew Savikas, in an article in O’Reilly TOC some time ago, mooted the idea of Content as a Service. My takeaway from it was simple. People do not pay for the “content” of a song or clip on iTunes as much as they pay for the convenience of getting to the item quickly and with a minimum of fuss. One could argue that traditional media had a role in making it simple and convenient for us to consume analog content, and that they will be able to adjust to the new world accordingly.
  • Relevance: Now it gets a little more interesting, touching on interests and aspirations, on preferences and profiling. Something that the analog world was poor at, something that traditional media didn’t really take up in the digital world. Can be done in many ways, some involving technology, some involving humans. And some involving both. Ad-based relevance is becoming harder and harder to sustain; curation via social networks seems to work, and to work well.
  • Consume-ability: This covers a whole shopping-trolley of concepts right now, and I’m going to have to work on it. I use it to mean device-agnostic availability of the digital content, so that I don’t have to use an iPod to listen to music from iTunes. I use it to mean ease of comprehension, whether through the use of visualisation tools like heatmaps or wordles or tag clouds or charts or whatever. I use it to mean tools to simplify (and sometimes even enrich) the content, via translation, via summarising, via hyperlinks, via mashups (especially those that add location or time contexts). I use it to mean the use of tools like Layar and Retroscope. [Incidentally, I plug these technologies completely unashamedly. Both Maarten and Chris are friends, but that’s not why I blog about them. I blog about them because they’re brilliant!]
  • Produce-ability: We’ve only just begun to appreciate a return to the Maker culture, something that people like Tim O’Reilly, Dale Dougherty, Cory Doctorow, Larry Lessig et al have been yelling about for some time now. The industrial-revolution-meets-central-broadcast woolly mammoth of the last 150 years seems incapable of recognising the significance of the small mammals currently underfoot. So that model is destined to go the way of all mammoths. Soon we will look at things in terms of how easy they are to get under the hood of, how easy they are to adapt, mutate, mangle, make something completely new out of. Which is why the rules of engagement will change. Intellectual property rights will be recast. Yes, will. There is no longer a choice, just the illusion of time. It is over. Period.
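
If you wanted to make those six outcomes operational, a curator’s record for a single item might look something like this; a minimal sketch, with every field name and threshold my own invention rather than any established schema.

```python
from dataclasses import dataclass

@dataclass
class CurationAssessment:
    """One curator's judgment of one digital item, on the six dimensions above."""
    authenticity: bool      # is it by whom it claims to be by?
    veracity: bool          # do the "facts" check out?
    access: float           # 0..1: how little fuss to get at it
    relevance: float        # 0..1: for this audience, at this moment
    consume_ability: float  # 0..1: device-agnostic, comprehensible, enrichable
    produce_ability: float  # 0..1: how easy to get under the hood and remix

    def passes(self, threshold: float = 0.5) -> bool:
        # Authenticity and veracity are gates; the other four are degrees.
        if not (self.authenticity and self.veracity):
            return False
        graded = (self.access + self.relevance +
                  self.consume_ability + self.produce_ability)
        return graded / 4 >= threshold
```

Note how the sketch echoes the argument: the two gates are what traditional media already know how to operate, while the four graded dimensions are exactly where democratised curation has the advantage.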

Production, consumption and distribution of information have already been democratised. There’s no turning back. Curation will go that way. Which means that the very concept of the expert, the professional, the editor, the moderator of all that is great and good, changes.

And yes, we have to consider whether the internet makes us smart. Whether the internet makes us smarter.

The emphasis should be on us. Us.

[It’s one a.m. now and I’m tuckered out. Time to publish and, if necessary, be damned. Let me know what you think. There’s a lot more where this came from, but I want to know if you’re interested before I share it.]

What we share: Continuing to look at privacy, sideways

We now have a growing and fascinating array of tools with which to share information with others, “social” tools. Having spent some time recently thinking about why we share (posts here and here), I wanted to spend some time sharing my thoughts with you on the topic of what we share; in a few days’ time, I will spend some time looking at the question of whom.

I think there’s an overarching principle here: everything we share should be for the edification of someone. It should build someone up, should encourage someone, should help someone learn something of value, should assist someone in doing something they’re interested in doing.

There has to be a someone in mind. Even if that someone is you.

So that’s the first filter. Is what I am about to share capable of edifying someone? If the answer is no, then I resist the temptation to share.

The next filter is related to the precise nature of the information that is being shared. I try and think of the information as belonging to one or more of the following classes:

  • Environmental alerts and signals: location info, climate info, traffic info, that sort of thing
  • Social object analysis: reviews and ratings of books, films, restaurants, songs, shows, plays, etc.
  • Noteworthy pointers: links to news items, articles, blogs, even RTs, particularly news and views related to my network of relationships
  • Activity narratives: what I’m listening to, what I’m doing, what I’m eating, what I’m watching, what I’m reading
  • Human-powered search and assistance: basically a cry for crowdsourced help
  • Mood and presence indicators: available or busy signals, online or offline indicators, and so on

I try and remind myself what the nature of the information is, just to get a feel for what I’m doing and why I’m doing it. If I can’t figure it out, I stop. [In reality this is not a mechanical exercise, it happens very fast because it becomes instinctive and intuitive over time].

What this classification does is to simplify my approach to the next filter, that of “ownership” and confidentiality.

Am I free to share this information with others on an unrestricted basis? Is the information really mine to share with others? This is a critical issue. Take a simple example. Let’s say I have your personal mobile phone number. What I really have is a loan of your number giving me the right to use it, rather than an inalienable right to pass it on to others. Liberty is not licence. So that means I cannot always share what I am doing, because I cannot assume that others I’m with are happy to have their whereabouts and activities shared in public. I have to think about it.

Which brings me to the next filter. Will what I am sharing have an adverse effect on anyone? When I look at something like Twitter, I am disappointed with the number of people who share minute-by-minute football scores, for example. This comes under the heading of “spoilers”. We live in an age where many people time-shift their interaction with many forms of entertainment, and we have to make sure that we do not impede their ability to continue doing this. So film plots, book plots, sports scores, TV series developments, these are all areas where we have to exercise careful judgment. [In this context, I love the way imdb has clearly signalled spoiler alerts in their reviews.]

This then moves me on to quite a hard question, how often should I share? And you know the honest answer? Only experimentation will tell. From what I’ve seen so far, people appear to have different tolerance levels for frequency in different sharing environments. If I look simply at Twitter, Facebook and LinkedIn, the sense I get is that people tolerate a high level of update on Twitter, a considerably lower level on Facebook and a significantly lower level on LinkedIn. This may not be the intent of the site and function designers, but it is what is suggested by the feedback I’ve received so far.

Some people asked me to cut off the link between my tweets and my Facebook status. So I did. Others felt disappointed when I did that, and told me so. There wasn’t much I could do. I suggested they follow me via FriendFeed or directly via Twitter. Oddly enough, no one complained when FriendFeed disappeared into the Facebook stable. More recently, as it became possible for LinkedIn to display everything from everywhere, people started pinging me and asking whether I’d turn down my update frequency. So I did, primarily by cutting off direct connections between Twitter and LinkedIn. Again, some people complained.

The way forward appears to revolve around the use of hashtags, so now I use #fb and #in to signal where else I want my tweet to show. It’s kludgy, but it will do for now. In a perfect world I would not want this to be a publisher activity, it should be a subscriber choice. The publisher would encode tweets by theme or topic, the subscriber would only pull the thematic tweets that the person was interested in.

You see, someone who likes my food tweets may be completely uninterested in my music tweets. Someone who is interested in my book reviews may be left untouched by my cricket stories. So somewhere I have to encode outputs, and somewhere the subscriber has to select filters. That’s where we will have to head.
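
The code for that shift is almost trivially small; it’s the change of who runs it that matters. A sketch, with the tag and channel names invented for illustration:

```python
# Today: the publisher routes. Hashtags in the tweet decide where it appears.
def publisher_routes(tweet: str):
    channels = ["twitter"]                       # always goes out on Twitter
    if "#fb" in tweet:
        channels.append("facebook")
    if "#in" in tweet:
        channels.append("linkedin")
    return channels

# Tomorrow: the publisher merely labels by theme; each subscriber selects.
def subscriber_filters(stream, interests):
    """stream is a list of (topics, tweet) pairs; a reader keeps only
    the themes they have chosen to pull."""
    return [tweet for topics, tweet in stream if topics & set(interests)]

stream = [({"food"}, "Best biryani in years"),
          ({"music"}, "Listening to Exile on Main St."),
          ({"cricket", "food"}, "Tea at Lord's; rain stopped play")]

print(subscriber_filters(stream, {"food"}))      # the food tweets, nothing else
```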

Which brings me to my final point for this post, the “How much?” filter. We’re used to the term Too Much Information, but how do we do something about it? And here again only time will tell; experimentation is required before the conventions evolve.

Right now, there is a simple continuum: Twitter to Tumblr (or equivalent), on to blog, on to book (or equivalent). But that may change, as people seek to extend Twitter size, reduce blog size, whatever.

So there it is. We should share things that edify people. We should have some idea of how this edification takes place. We should ensure we have the right to share the information. We should take care about unintended consequences and adverse effects; and we should keep a keen eye on overall frequency and length.
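
Strung together, the filters amount to a very small decision procedure. A sketch, in which each field stands in for a human judgment that is anything but mechanical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    """One would-be update, with the outcome of each filter recorded."""
    edifies_someone: bool        # filter 1: does it build someone up?
    kind: Optional[str]          # filter 2: alert, review, pointer, narrative...
    mine_to_share: bool          # filter 3: ownership and confidentiality
    spoils_or_harms: bool        # filter 4: adverse effects, spoilers
    within_frequency: bool       # filter 5: tolerable rate for this channel

def should_share(c: Candidate) -> bool:
    return (c.edifies_someone
            and c.kind is not None
            and c.mine_to_share
            and not c.spoils_or_harms
            and c.within_frequency)

# e.g. a live football score: edifying to some, a spoiler to time-shifters
score = Candidate(True, "activity narrative", True,
                  spoils_or_harms=True, within_frequency=True)
print(should_share(score))       # False
```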

We’re still learning about all this. Ad-hoc conventions will emerge, evolve, mutate. The important thing is to be aware of these issues, because then we can have informed discussions about sharing and privacy and the social implications and how we create value.