Image above courtesy of Library of Congress, Prints and Photographs Online Catalog.
I’ve never been worried about information overload, tending to treat it as a problem of consumption rather than one of production or availability: you don’t have to listen to everything, read everything, watch everything. As a result, when, some years ago, I heard Clay Shirky describe it as “filter failure”, I found myself nodding vigorously (as we Indians are wont to do, occasionally sending confusing signals to onlookers and observers).
Filtering at the point of consumption rather than production. Photo courtesy The National Archives UK.
Ever since then, I’ve been spending time thinking about the hows and whys of filtering information, and have arrived “provisionally” at the following conclusions, my three laws of information filtering:
1. Where possible, avoid filtering “on the way in”; let the brain work out what is valuable and what is not.
2. Always filter “on the way out”: think hard about what you say or write for public consumption, and about why you share what you share.
3. If you must filter “on the way in”, then make sure the filter is at the edge, the consumer, the receiver, the subscriber, and not at the source or publisher.
What am I basing all this on? Let’s take each point in turn:
a. Not filtering at all on inputs
One of the primary justifications for even thinking about this came from my childhood and youth in India, surrounded by mothers and children and crowds and noise. Lots of mothers and children. Lots and lots of mothers and children, amidst lots and lots of crowds. And some serious noise as well. Which is why I was fascinated by the way mothers somehow managed to recognise the cry of their own children, and could remain singularly unperturbed, going placidly about their business amidst the noise and haste. This ability to ignore the cries of all the other babies while being watchful and responsive to one particular cry fascinated me. Years later, I experienced it as a parent, nowhere near as good at it as my wife was, but the capacity was there. And it made me marvel at how the brain evolves to do this.
There are many other justifications. Over the years I’ve spent quite a lot of time reading Michael Polanyi, who originally introduced the “Rumsfeld” “unknown unknowns” concept to us (the things we know we know; the things we know we don’t know; and the things we don’t know we don’t know). I was left with the view that I should absorb everything like a new sponge, letting my brain work out what is worth responding to, what should be stored for later action, what should be discarded. And, largely, it’s worked for me. Okay, so what? Why should my personal experience have any bearing on this? I agree. Which is why I would encourage you to read The Aha! Moment: The Cognitive Neuroscience of Insight, by Kounios and Beeman. Or, if you prefer your reading a little bit less academic, try The Unleashed Mind: Why Creative People are Eccentric. In fact, as shown below, the cover of the latest issue of Scientific American MIND actually uses the phrase “An Unfiltered Mind” when promoting that particular article.
b. Filtering outputs
We live in a world where more and more people have the ability to publish what they think, feel or learn about, via web sites, blogs, microblogs and social networks. We live in a world where this “democratised” publishing has the ability to reach millions, perhaps billions. These are powerful abilities. And with those powerful abilities come powerful responsibilities. Responsibilities related to truth and accuracy, responsibilities related to wisdom and sensitivity. Responsibilities related to curation and verification. None of this is new. Every day we fill forms in with caveats stating that what we say is true to the best of our knowledge and ability; every day, as decent human beings, we take care not to offend or handicap people because of their caste, creed, race, gender, age. Every day we take care to protect minors, to uphold the confidentiality of our families and friends and colleagues and employers and trading partners and customers. Sometimes, some of these things are enforced within contracts of employment. All of them, however, should come under the umbrella term “common decency”.
These principles have always been at the forefront of cyberspace, and were memorably and succinctly put for WELL members as YOYOW, You Own Your Own Words. Every one of us does own our own words. Whatever the law says. It’s not about the law, it’s about human decency. We owe it to our fellow humans.
When we share, it’s worth thinking about why we share, something I wrote about here and here.
c. Filtering by subscriber, not by publisher
Most readers of this blog are used to having a relatively free press around them, despite superinjunctions and despite the actions taken to suppress Wikileaks. A relatively free press, with intrinsic weaknesses. Weaknesses brought about by largely narrow ownership of media properties, weaknesses exacerbated by proprietary anchors and frames, the biases that can corrupt publication, weaknesses underpinned by the inbuilt corruptibility of broadcast models. Nevertheless, a relatively free press.
The augmentation of mainstream media by the web in general, and by “social media” in particular, is often seen as the cause of information overload. With the predictable consequence that the world looks to the big web players to solve the problem.
Which they are keen to do.
Google, Facebook, Microsoft et al are all out there, trying to figure out the best way of giving you what you want. And implementing the filtering mechanisms to do this. Filtering mechanisms that operate at source.
There is a growing risk that you will only be presented with information that someone else thinks is what you want to see, read or hear. Accentuating your biases and prejudices. Increasing groupthink. Narrowing your frame of reference. If you want to know more about this, it is worth reading Eli Pariser’s The Filter Bubble. Not much of a reader? Then try this TED talk instead. Jonathan Zittrain, in The Future of the Internet and How to Stop It, has already been warning us of this for a while.
Now Google, Microsoft, Facebook, all mean well. They want to help us. The filters-at-source are there to personalise service to us, to make things simple and convenient for us. The risks that Pariser and Zittrain speak of are, to an extent, unintended consequences of well-meaning design.
But there’s a darker side to it. Once you concentrate solely on the design of filterability at source, it is there to be used. By agencies and bodies of all sorts and descriptions, ranging from less-than-trustworthy companies to out-and-out malevolent governments. And everything in between.
We need to be very careful. Very very careful. Which is why I want to concentrate on subscriber-filters, not publisher-filters.
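To make the distinction concrete, here is a minimal sketch of a subscriber-filter, in Python; the feed URL and the interest keywords are hypothetical stand-ins for whatever a reader might choose. The point is architectural: the publisher emits everything, and the filtering rules live at the edge, where the subscriber can inspect, tune or remove them.

```python
# A sketch of subscriber-side filtering: the publisher emits everything,
# and the filter rules live at the edge, under the reader's control.
# The feed URL and keywords below are hypothetical placeholders.

import feedparser  # third-party: pip install feedparser

# The subscriber's own interest list: visible, tunable, removable.
MY_INTERESTS = {"filtering", "attention", "serendipity"}

def interesting(entry) -> bool:
    """Return True if a feed entry matches the reader's own interests."""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    return any(keyword in text for keyword in MY_INTERESTS)

def read_feed(url: str) -> None:
    feed = feedparser.parse(url)      # the publisher sends the full stream
    for entry in feed.entries:
        if interesting(entry):        # filtering happens here, at the edge
            print(entry.get("title", "(untitled)"))

if __name__ == "__main__":
    read_feed("https://example.com/feed.xml")  # hypothetical feed URL
```

Contrast this with filtering at source, where the same rules would sit inside the publisher’s or platform’s ranking algorithm, invisible to the reader and unalterable by them.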
Otherwise, while we’re all so busy trying to prevent Orwell’s Nineteen Eighty-Four, we’re going to find ourselves bringing about Huxley’s Brave New World. And, as Huxley predicted, perhaps actually feeling good about it.
More to follow. Views in the meantime?
I agree strongly on the last two points, particularly 3). For broad awareness, a good (i.e. diverse) selection of people, news and other sources is the best “weak signal amplifier”, increasing the likelihood that something interesting or important said by someone you do not follow is eventually brought to your attention. If it’s important, or if you have tuned your interest filters well, someone you do follow will talk about it.
On 1) “no filtering on inputs”, I believe it depends a lot on context. If it is general awareness, fine. If it is in a business context, I believe it is highly desirable to provide some means of selective focus – on a business activity (content) or person – as well as a means of notification for particularly important items of interest. For example, the activity stream of a large organization is so dense that the likelihood of losing a significant signal in the noise is too high.
Your point on mothers recognizing their own children among the general noise and chatter is well taken – but the distance separating mother and child is itself a filter. Small children can and should be close enough to be in almost continuous sight and sound – and what parent would not immediately react to their child’s cry of fright or pain? The chatter of people – and of other people’s children – hundreds of feet away blends into a murmuring background, not distinguishable signals.
For some recent thoughts on selective focus and notification, see
http://traction.tractionsoftware.com/traction/permalink/Blog1672
You often cite Gordon Moore, Robert Metcalfe and George Gilder in your posts. I think on this subject, Jon Postel deserves some credit for his insight in RFC 793 Section 2.10
– http://tools.ietf.org/html/rfc793#section-2.10, Sep 1981: “be conservative in what you do, be liberal in what you accept from others.”
This is usually taken to mean that you should try to accept any transmission whose meaning is decipherable, but ensure that your own emitted transmissions are properly constructed and have value to the network.
This implies we should do enough filtering on the way in to reject meaningless (or ambiguous) messages. On the way out we should not only filter but also take some care to ensure our own meaning is well-expressed.
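As an illustration only (mine, not something from the RFC itself), here is a small Python sketch of that reading of the principle, applied to a toy key:value message format: liberal on the way in, rejecting only what is genuinely undecipherable, and conservative on the way out, emitting a single canonical form.

```python
# A toy key:value message format, handled Postel-style: liberal on
# input (tolerate sloppy spacing, casing and blank lines; reject only
# the undecipherable), conservative on output (one canonical form).

from typing import Optional

def parse_message(raw: str) -> Optional[dict]:
    """Liberal input: accept messy formatting; return None only when
    the message is genuinely meaningless."""
    fields = {}
    for line in raw.splitlines():
        if not line.strip():
            continue              # tolerate blank lines
        if ":" not in line:
            return None           # undecipherable: reject
        key, _, value = line.partition(":")
        fields[key.strip().lower()] = value.strip()
    return fields or None

def emit_message(fields: dict) -> str:
    """Conservative output: always the same well-formed representation."""
    return "\n".join(f"{key}: {value}" for key, value in sorted(fields.items()))

# Sloppy input is accepted; the re-emitted form is canonical.
msg = parse_message("  SUBJECT :  filters \nFROM: jp ")
print(emit_message(msg))  # -> "from: jp" then "subject: filters"
```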
Infotention: You might be interested in this mini-course
http://howardrheingold.posterous.com/a-mini-course-on-infotention
To flip from firehoses and filters to shopping and brands: the 18-34 demographic has abandoned the fire-hosed shopping mall and instead shops at premium outlet malls indexed by brand: http://www.premiumoutlets.com/
Back to information firehoses and filters: currently, social is the interjected equivalent of a brand, helping us locate information we want to consume from the firehoses. Yet even on Empire Avenue, social doesn’t feel quite the equivalent of a brand.
Is there an information filter like a brand?
We all filter on input. It’s usually time that acts as the filter – if we don’t have the time, we don’t spend time absorbing the data. The issue with the Internet is that there is so much data coming in, and most of us have less time, so we end up filtering more.
On point 3, I have been anxious for a long time that Google has been filtering results to consumers/users/readers – which leaves businesses (especially ecommerce sites and publishers) totally dependent on Google as the gatekeeper/income stream. Although this is always the problem with a monopoly, the fact that the Internet has no passing trade (in retail speak) magnifies Google’s positioning.
Your point on resisting input filters is spot on, I think, especially pertaining to information in the public domain. On an individual level, the idea now pervades that we do well to release our personal input filters: all experiences and data are welcomed unfettered into the postmodern mind and considered equally viable, with many of them, I think, wreaking much havoc.
McLuhan, while I don’t subscribe to many of his notions, presciently (in the 1960s, I believe) talked about the emergence of “corporate” man following what he called the “private man” era; as man entered the networked age, he claimed, the speed of information travel would allow groupthink to coalesce more rapidly around certain ideas, in turn leading to a more fractious and polarized culture.
@greg I am still concerned about blinkering and its impact on creativity, the risk of tunnel thinking, the lack of serendipitous events. So while I understand why people would want to filter, and why people do filter, I think that perhaps we filter too much. Because we can.
@dom couldn’t agree more. And always happy to signal our debt to Postie. I should do it more often.
@howard excellent. Took me a while to assimilate the three videos and concept map. As usual I have learnt something valuable from you, something that’s been happening since I started reading Whole Earth Review decades ago! Thank you.
@clive perhaps we are already at the point where the “brand” is the collective aggregation of what we feel about a company at a given time, rather than what they try to convince us we should feel…
@bradley see Howard’s comment earlier, and watch his mini-course on infotention. The attention aspect deals with your time variable from the filtering perspective.
@brian I found it helpful to watch Howard Rheingold’s three 7-minute videos on infotention (see his comment earlier), and also the concept map. They helped solidify and structure some of my own thinking on filters and attention.
I don’t believe Google can truly filter-at-source completely. For general searches perhaps, and within their news results mostly, but neither completely.
Google filters search results, including News results, based upon an algorithm that is kept confidential yet regularly ‘worked around’ by savvy internet users. Much like security against computer viruses, any attempt to ‘control’ is a constant and escalating endeavor.
The more filtered Google becomes, the more likely a competing resource will evolve. I’m reminded that Northern Light was once crowned king and internet directories ruled the landscape.
I see the current mobile internet as both déjà vu and a landscape in search of a ruler.