Introduction
This post is a follow-up to one I wrote a few days ago; based on the twitter and e-mail feedback, and on the comments I’ve received via the blog and facebook, it seemed worthwhile to continue this train of thought.
Summary of previous post
Let me first summarise where I was trying to go with the previous post:
- It’s normal and natural for human beings to “publish” signals that can be shared
- Signals can be of many types: alerts and alarms, territorial markers, calls for action, pure unadulterated information
- Signals can be shared in small groups or made available to all
- As the tools for sharing improve, and as they become accessible by more and more people, the sharing of signals will grow
- Twitter and Chatter are the leading examples of consumer and enterprise tools for sharing signals
- All this could have significant benefits for us, at home, at work and at play (that’s if we know the difference any more), particularly as we grasp the value of knowledge worker cognitive surplus
The role of technology in all this
At the outset I want to make sure people understand that technology is nothing more than an enabler of all this. People must want to share, to make productive use of their cognitive surplus. All this emphasis on sharing may sound altruistic, it may sound Utopian; but we all have to get it into our heads that man is a social animal, and that sharing creates value, both at home and at work. Short-sighted, ill-thought-out and sometimes downright nefarious approaches to “intellectual property rights” over the last fifty years or so have blinkered people from seeing this fundamental point.
Sharing is a very people-centric concept, part of our culture, part of our values. In a sense these posts are not about Twitter or Chatter, but about the existence of toolsets that make sharing easier, more accessible, more affordable. If we start believing that it’s all about the technology, we will start convincing ourselves that events in Tunisia and Egypt and Libya were about the technology rather than the will of the people. So hey, let’s be careful out there.
When I was young, I was taught that there were three “waves” to technology adoption. In wave 1, there was a substitution effect: people used the new technology to do something they used to do with something else, substituting the car for the horse. In wave 2, there was an “increased use” effect: a horse would take you maybe 40 miles in a day at best, while a car could take you 400 miles in that same day, so people could travel further. And in wave 3, we had “embedded use”, where the technology was an intrinsic part of a new product or service, unseen before: smartcards are an example.
The Nineties and the dot.com boom led to a newish taxonomy for markets and products and services, as startups and venture capitalists desperately tried to put labels on things: there was a flowering of “categories” and “category-busters”. Maybe I’ve read too much Kevin Kelly (though I don’t really think that is possible): I would strongly recommend every one of his books, right up to his latest, “What Technology Wants”. Over the years it became clear that I was beginning to get hung up on seeing various aspects of technology strictly from an evolutionary standpoint.
This came to a head recently when I was reading Tim Flannery’s Here On Earth, another wonderful book. Go out and buy it now. Stop reading this post. Come back later. You won’t regret it. Anyway, let me tell you about the a-ha moment I had while reading that book.
Technology speeds up evolution
In Here On Earth, Flannery spends some time talking about technology as a means of radically speeding up evolution; the way I interpreted what he was saying, he was comparing the time taken for species to grow armour or fangs or claws with the time taken for humans to make suits of armour and spears.
This resonated a lot with me, because for the past two years I’ve been writing a book about information seen from the perspective of food: information ingredients, preparation of information, nutrients in information, toxic information, how information is processed, the concept of information waste, all around the fulcrum of an information diet. Part of the stimulus for writing that book was gaining a deeper understanding of cooking as an “external stomach”.
Implications for the pheromone concept as applied to Twitter and Chatter
With all this in mind, it is not enough for us to visualise tweets as pheromones, copying nature; we should look further, see what we can do that we could not have done before, extending what nature shows us. I’ve tried to build a small list to help people think this through:
- Pheromones aren’t archivable, indexable, searchable, retrievable
- Pheromones can’t be analysed for historical trends
- Pheromones can’t be mashed
- Segmenting pheromones isn’t easy
- So whatever we do with digital pheromones should do all of this, overcoming the constraints of their natural counterparts (a small sketch follows below)
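To make the contrast concrete, here is a minimal sketch of what a “digital pheromone” might look like once it is archivable, taggable, searchable and retrievable. The Signal and SignalArchive names, the fields and the in-memory store are illustrative assumptions of mine, not the data model of Twitter or Chatter.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Signal:
    """A hypothetical digital pheromone: a signal that persists and can be queried."""
    author: str
    text: str
    tags: List[str]
    created_at: datetime = field(default_factory=datetime.utcnow)

class SignalArchive:
    """An in-memory archive: unlike a chemical trail, nothing here evaporates."""
    def __init__(self):
        self._signals: List[Signal] = []

    def publish(self, signal: Signal) -> None:
        self._signals.append(signal)  # archived rather than left to decay

    def search(self, keyword: str) -> List[Signal]:
        return [s for s in self._signals if keyword.lower() in s.text.lower()]

    def by_tag(self, tag: str) -> List[Signal]:
        return [s for s in self._signals if tag in s.tags]

# The same signal can be retrieved by anyone, any number of times
archive = SignalArchive()
archive.publish(Signal("jobsworth", "Slow-cooked lamb tonight", ["food"]))
archive.publish(Signal("jobsworth", "Notes from this morning's workshop", ["conference"]))
print([s.text for s in archive.by_tag("food")])  # ['Slow-cooked lamb tonight']
```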
An aside: Whenever I try and learn something, I start with understanding similarities with other things I know, then I concentrate on the differences. In a way that’s what I’m doing with this post; the previous post concentrated on similarities between tweeting and pheromones, this one will concentrate on differences and why they’re valuable.
Histories, trending and analytics
Unlike pheromones, tweets are digital and can easily be archived, stored, tagged and classified, searched for, and retrieved at will. So the signals are more easily accessible to a larger group of people. In this context, we have to think of the signalling aspect of tweets, from an economics viewpoint, as “extreme nonrival goods”: one person’s use of the signal does not impact the ability of others to use the same signal. Obviously there are constraints to do with scarcities and abundances: an ant can follow a trail to a store of food only to find that the store has been used up; physical things tend to obey laws of scarcity, while digital things tend to obey laws of abundance. [Don’t get me started on the abominations taking place in the Digital Economy Act space; that’s for another post, another time.]
There are no time series for pheromone tracks, nor an easy way to find out “what’s trending now”. These are very powerful capabilities in the twittersphere, as long as the data is there, accessible and exportable. The implications for what used to be called “knowledge management” are radical and extreme. Visualisation tools become more and more important, in terms of tag clouds and heat maps and radar diagrams and the like.
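By way of illustration of the time-series and “what’s trending now” point, here is a small sketch that buckets tagged signals by hour and counts them; the data and the hourly bucketing are invented for the example, and real trending algorithms weight recency and volume far more carefully.

```python
from collections import Counter
from datetime import datetime

# (timestamp, hashtag) pairs standing in for a stream of tweets
stream = [
    (datetime(2011, 3, 1, 9, 5),   "#food"),
    (datetime(2011, 3, 1, 9, 40),  "#food"),
    (datetime(2011, 3, 1, 9, 55),  "#music"),
    (datetime(2011, 3, 1, 10, 10), "#conference"),
    (datetime(2011, 3, 1, 10, 20), "#conference"),
    (datetime(2011, 3, 1, 10, 45), "#conference"),
]

def trending(stream, hour):
    """Count hashtags within a single hourly bucket -- a crude 'trending now'."""
    counts = Counter(tag for ts, tag in stream if ts.hour == hour)
    return counts.most_common()

print(trending(stream, 9))   # [('#food', 2), ('#music', 1)]
print(trending(stream, 10))  # [('#conference', 3)]
```

A pheromone trail offers no equivalent history to query after the fact; the digital signal does.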
Understanding the location implications of the signals is also very valuable. Some years ago I was told that every desktop in Google had its latitude and longitude embedded in the desktop ID; this became very valuable for doing analysis of things like prediction markets, when you want to see the impact of physical adjacency on the results.
But that was in another country, and besides the wench is dead. Today, we live in times when workspaces no longer need to have desks, so the concept of the desktop becomes more and more questionable. Location becomes something dynamic, which is why the GPS or equivalent in smartphones and tablets is coming to the fore.
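As a rough illustration of why dynamic location matters, here is a sketch that attaches latitude and longitude to each signal and asks which authors are physically adjacent. The haversine distance is a standard formula, but the signal structure and the 5 km threshold are my own assumptions for the example.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical signals carrying a dynamic location rather than a fixed desktop ID
signals = [
    {"author": "a", "lat": 51.5074, "lon": -0.1278},    # central London
    {"author": "b", "lat": 51.5155, "lon": -0.0922},    # also London
    {"author": "c", "lat": 37.7749, "lon": -122.4194},  # San Francisco
]

# Which authors are physically adjacent (within, say, 5 km) to the first?
origin = signals[0]
nearby = [s["author"] for s in signals[1:]
          if distance_km(origin["lat"], origin["lon"], s["lat"], s["lon"]) < 5]
print(nearby)  # ['b']
```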
The physics is different
Many years ago, when I was first venturing into virtual worlds, I remember reading an article which really struck me at the time. What it said was that in virtual worlds, “the physics is different”. With no gravity, no heat, no light, no cold, there was no reason you couldn’t fly or starve or walk around naked for that matter.
There’s a bit of the-physics-is-different about tweets. The pheromone concept tends to deal with signals given by a single author, then amplified (or allowed to decay) by other authors either overlaying the signal or avoiding it; which means, in effect, that the signals are aggregated in the same physical area and in the same period of time. Tweets don’t face that constraint. As a result, we can daisy-chain tweets by a single person over a period of time, dispersed geographically or by logical context. Or we can aggregate all tweets sharing some characteristic or other. Or, for that matter, block out all tweets that share specific characteristics.
This ability to tune in or tune out at such a level of granularity is of critical value, particularly when it comes to filtering.
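A minimal sketch of that granular tuning, assuming tweets are represented as plain dictionaries whose author, topic and timestamp fields are my own invention rather than Twitter’s data model: daisy-chain one author over time, aggregate by a shared characteristic, or block a characteristic out entirely.

```python
tweets = [
    {"author": "jobsworth", "topic": "food",       "ts": 1, "text": "prepping dinner"},
    {"author": "jobsworth", "topic": "music",      "ts": 2, "text": "blipping some jazz"},
    {"author": "jobsworth", "topic": "food",       "ts": 3, "text": "the lamb is resting"},
    {"author": "someone",   "topic": "conference", "ts": 4, "text": "keynote starting"},
]

# Daisy-chain one author's tweets over time, wherever they were sent from
chain = sorted((t for t in tweets if t["author"] == "jobsworth"), key=lambda t: t["ts"])

# Aggregate everything sharing a characteristic...
food_only = [t for t in tweets if t["topic"] == "food"]

# ...or block out everything sharing a characteristic
no_music = [t for t in tweets if t["topic"] != "music"]

print(len(chain), len(food_only), len(no_music))  # 3 2 3
```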
The need for good filtering
If everyone tweeted and everything tweeted, soon all would be noise and no signal. As Clay Shirky said, there is no such thing as information overload, there’s only filter failure. In other words, information overload is not a production problem but one of consumption.
This is important. Too often, whenever there is a sense of overload, people start trying to filter at the production point. In a publish-subscribe environment, this translates to asking the publisher to take action to solve the problem. My instinct goes completely against this. I think we should always allow publishing to carry on unfettered, unhampered, and that all filtering should take place at the edge, at the subscriber level. There’s something very freedom-of-expression and freedom-of-speech about it. But it goes further: the more we try and concentrate on building filters at publisher level, the more we build systems open to bullying and misuse by creating central bottlenecks. Choke points are dangerous in such environments.
It is far better to build filters at subscriber level. Take my twitter feed, @jobsworth. Most of my tweets are about four things: my thoughts about information, often related to blog posts; the food I’m cooking and eating; the music I’m listening to; and my summaries and reports on conferences and workshops and seminars. [I tend not to tweet at sports events because of the spoiler risk].
So while there is a fairly low underlying tweet level, my twitter activity is bursty, lumpy. At weekends it goes up as I play music at blip.fm/jobsworth; when I’m at conferences it can go up to 50 tweets an hour; and, also usually at weekends, when I’m cooking, the tweet frequency goes up. This lumpiness is uncomfortable for some people; they find that every now and then I dominate their tweetspace.
As a result, over the years, a number of people have asked me whether I could suppress one stream or the other, e.g. by cutting the link between twitter and facebook, or between blip.fm and twitter, et cetera; or, more often, they have asked me to fragment my twitter ID, to have one for food and one for music, and so on.
Letting the subscriber do the filtering
This is not a radical idea. The whole point of personalisation is that it takes place at the edge and not at the core. So, to solve the lumpiness problem, we need better subscriber tools. With such tools, you should be able to say, I want to follow @jobsworth’s conference tweets and his food tweets but not his music tweets, while someone else says the precise opposite. And all I will have to do is to ensure that the 21st century equivalent of hashtags is used to segment and categorise and classify my tweets. Implementing filters at publisher level is a broadcast concept, and, furthermore, runs the risk of misuse: every time you build a choke point, someone will come along and try to exert undue influence over that choke point. We shouldn’t have governments and quasi-governments telling publishers and ISPs what to publish and what not to publish, not in lands where words like “free” have any meaning.
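Here is a toy sketch of what such subscriber-side filtering might look like, assuming the publisher simply tags each tweet; the preference sets and the my_view function are hypothetical, not features of Twitter or Chatter.

```python
# One publisher stream, tagged by the author; each subscriber applies their own filter.
stream = [
    {"text": "Slides from today's workshop",  "tags": {"conference"}},
    {"text": "Tonight: slow-roast pork belly", "tags": {"food"}},
    {"text": "Blipping some Leonard Cohen",    "tags": {"music"}},
]

def my_view(stream, follow_tags):
    """Filtering happens at the edge; the publisher never changes what it publishes."""
    return [t["text"] for t in stream if t["tags"] & follow_tags]

print(my_view(stream, {"conference", "food"}))  # one reader's choice
print(my_view(stream, {"music"}))               # another reader's opposite choice
```

The design point is that the stream itself is untouched; two subscribers with opposite preferences are both satisfied without the publisher doing anything beyond tagging.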
Lifestreaming implications for workstreaming
I’ve used tweets as a generic term, not bothering to differentiate between Twitter and Chatter; in most cases the same things hold true for both. But there are a few strategic differences.
Firstly, unlike Twitter, Chatter tends to operate in a space between systems of engagement and systems of record (see my post on this distinction here); in enterprises, identity is subject to strict verification, which makes this bridging simpler to do. Secondly, a Chatter world is likely to have many inanimate publishers, and the “asymmetric follow” (a term I first saw used by James Governor of RedMonk) becomes important. [Think about it for a minute. You get spam in your email? Boo-hoo. How much of your email spam is actually formal and internal to your company, rather than the result of malign external forces?]
The ability to have strong verification of identity in a corporate context has many benefits, since it is then possible to have formal attributes, values and characteristics associated with an individual. This is where “gamification” and “badges” start having real tangible value in the enterprise, with different classes of badges, some personally asserted, some bestowed by a third party, some “earned” as a result of completing an activity or activities successfully.
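To make the badge idea a little more concrete, here is a toy sketch of the three classes mentioned above; the class names, fields and examples are purely illustrative, not any particular gamification product.

```python
from dataclasses import dataclass
from enum import Enum

class BadgeSource(Enum):
    SELF_ASSERTED = "self-asserted"  # "I know SQL tuning"
    BESTOWED = "bestowed"            # conferred by a third party
    EARNED = "earned"                # granted on completing an activity

@dataclass
class Badge:
    name: str
    source: BadgeSource
    holder: str  # tied to a verified corporate identity

badges = [
    Badge("SQL tuning", BadgeSource.SELF_ASSERTED, "alice@example.com"),
    Badge("Certified Scrum Master", BadgeSource.BESTOWED, "alice@example.com"),
    Badge("Closed 100 support cases", BadgeSource.EARNED, "alice@example.com"),
]
print([f"{b.name} ({b.source.value})" for b in badges])
```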
And on to Part 3
Again, I shall wait for feedback, and if people continue to be interested, I shall take this further. This time I intend to concentrate on the lumpiness of knowledge work; why workstreaming, in combination with a couple of other techniques, can make sensible use of the cognitive surplus; how this will allow enterprises of all sizes to move away from traditional politically charged blame-cultures to genuine value-builders.
And most importantly, I want to discuss why lifestreaming and workstreaming actually make us smarter human beings, in comparison with the dumbing-down that took place during the Industrial Age with its assembly line, division of labour, broadcast mindset, built on economics of scarcity, hierarchical to the extreme. Assembly line. Division of labour. Broadcast mindset. Scarcity-focused. Hierarchical in nature. Five constructs that have destroyed education, healthcare and government, and will soon destroy all industry. If we let them.