Image above courtesy of Library of Congress, Prints and Photographs Online Catalog.
I’ve never been worried about information overload, tending to treat it as a problem of consumption rather than one of production or availability: you don’t have to listen to everything, read everything, watch everything. As a result, when, some years ago, I heard Clay Shirky describe it as “filter failure”, I found myself nodding vigorously (as we Indians are wont to do, occasionally sending confusing signals to onlookers and observers).
Filtering at the point of consumption rather than production. Photo courtesy The National Archives UK.
Ever since then, I’ve been spending time thinking about the hows and whys of filtering information, and have arrived “provisionally” at the following conclusions, my three laws of information filtering:
1. Where possible, avoid filtering “on the way in”; let the brain work out what is valuable and what is not.
2. Always filter “on the way out”: think hard about what you say or write for public consumption, and about why you share what you share.
3. If you must filter “on the way in”, then make sure the filter is at the edge, the consumer, the receiver, the subscriber, and not at the source or publisher.
What am I basing all this on? Let’s take each point in turn:
a. Not filtering at all on inputs
One of the primary justifications for even thinking about this came from my childhood and youth in India, surrounded by mothers and children and crowds and noise. Lots of mothers and children. Lots and lots of mothers and children, amidst lots and lots of crowds. And some serious noise as well. Which is why I was fascinated by the way mothers somehow managed to recognise the cry of their own children, and could remain singularly unperturbed, going placidly about their business amidst the noise and haste. This ability to ignore the cries of all the other babies while being watchful and responsive to one particular cry fascinated me. Years later, I experienced it as a parent, nowhere near as good at it as my wife was, but the capacity was there. And it made me marvel at how the brain evolves to do this.
There are many other justifications. Over the years I’ve spent quite a lot of time reading Michael Polanyi, who introduced us to the “unknown unknowns” concept long before Rumsfeld made it famous (the things we know we know; the things we know we don’t know; and the things we don’t know we don’t know). I was left with the view that I should absorb everything like a new sponge, letting my brain work out what is worth responding to, what should be stored for later action, what should be discarded. And, largely, it’s worked for me. Okay, so what? Why should my personal experience have any bearing on this? Fair question. Which is why I would encourage you to read The Aha! Moment: The Cognitive Neuroscience of Insight, by Kounios and Beeman. Or, if you prefer your reading a little bit less academic, try The Unleashed Mind: Why Creative People are Eccentric. In fact, as shown below, the cover of the latest issue of Scientific American MIND actually uses the phrase “An Unfiltered Mind” when promoting that particular article.
b. Filtering outputs
We live in a world where more and more people have the ability to publish what they think, feel or learn about, via web sites, blogs, microblogs and social networks. We live in a world where this “democratised” publishing has the ability to reach millions, perhaps billions. These are powerful abilities. And with those powerful abilities come powerful responsibilities. Responsibilities related to truth and accuracy, responsibilities related to wisdom and sensitivity. Responsibilities related to curation and verification. None of this is new. Every day we fill in forms with caveats stating that what we say is true to the best of our knowledge and ability; every day, as decent human beings, we take care not to offend or disadvantage people because of their caste, creed, race, gender or age. Every day we take care to protect minors, to uphold the confidentiality of our families and friends and colleagues and employers and trading partners and customers. Sometimes, some of these things are enforced within contracts of employment. All of them, however, should come under the umbrella term “common decency”.
These principles have always been at the forefront of cyberspace, and were memorably and succinctly put for WELL members as YOYOW: You Own Your Own Words. Every one of us does own our own words. Whatever the law says. It’s not about the law, it’s about human decency. We owe it to our fellow humans.
When we share, it’s worth thinking about why we share, something I wrote about here and here.
c. Filtering by subscriber, not by publisher
Most readers of this blog are used to having a relatively free press around them, despite superinjunctions and despite the actions taken to suppress WikiLeaks. A relatively free press, with intrinsic weaknesses. Weaknesses brought about by the largely narrow ownership of media properties, weaknesses exacerbated by proprietary anchors and frames, the biases that can corrupt publication, weaknesses underpinned by the inbuilt corruptibility of broadcast models. Nevertheless, a relatively free press.
The augmentation of mainstream media by the web in general, and by “social media” in particular, is often seen as the cause of information overload. With the predictable consequence that the world looks to the big web players to solve the problem.
Which they are keen to do.
Google, Facebook, Microsoft et al are all out there, trying to figure out the best way of giving you what you want. And implementing the filtering mechanisms to do this. Filtering mechanisms that operate at source.
There is a growing risk that you will only be presented with information that someone else thinks you want to see, read or hear. Accentuating your biases and prejudices. Increasing groupthink. Narrowing your frame of reference. If you want to know more about this, it is worth reading Eli Pariser’s book The Filter Bubble. Not much of a reader? Then try this TED talk instead. Jonathan Zittrain, in The Future of the Internet and How to Stop It, has been warning us about this for a while.
Now Google, Microsoft and Facebook all mean well. They want to help us. The filters-at-source are there to personalise services for us, to make things simple and convenient for us. The risks that Pariser and Zittrain speak of are, to an extent, unintended consequences of well-meaning design.
But there’s a darker side to it. Once filterability is designed in at source, it is there to be used. By agencies and bodies of all sorts and descriptions, ranging from less-than-trustworthy companies to out-and-out malevolent governments. And everything in between.
We need to be very careful. Very, very careful. Which is why I want to concentrate on subscriber-filters, not publisher-filters.
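To make the distinction concrete, here is a minimal sketch in Python. Everything in it (the FeedItem type, the subscriber_filter function, the sample items) is a hypothetical illustration, not any real feed or API; it simply shows the shape of the idea, which is that the source ships everything and the rule deciding what surfaces lives with the reader.

```python
# A minimal sketch of a subscriber-side filter, under a toy feed model.
# All names here are hypothetical illustrations, not a real feed API.

from dataclasses import dataclass


@dataclass
class FeedItem:
    source: str
    title: str
    tags: frozenset


def subscriber_filter(items, keep):
    """Apply the reader's own rule at the point of consumption.

    The publisher sends everything; the subscriber decides what
    surfaces. Changing `keep` changes the view without the source
    ever knowing, which is the point of filtering at the edge.
    """
    return [item for item in items if keep(item)]


# The full, unfiltered firehose arrives at the edge...
firehose = [
    FeedItem("news", "Superinjunctions and the press", frozenset({"law", "media"})),
    FeedItem("blog", "Why I share what I share", frozenset({"sharing"})),
    FeedItem("ads", "You have won a prize!", frozenset({"spam"})),
]

# ...and the reader, not the publisher, owns the rule.
keep = lambda item: "spam" not in item.tags
for item in subscriber_filter(firehose, keep):
    print(f"{item.source}: {item.title}")
```

A publisher-filter would bake the equivalent of `keep` into the server, identical for every subscriber and invisible to all of them; here the rule stays at the edge, inspectable and swappable by the person it serves.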
Otherwise, while we’re all so busy trying to prevent Orwell’s Nineteen Eighty-Four, we’re going to find ourselves bringing about Huxley’s Brave New World. And, as Huxley predicted, perhaps actually feeling good about it.
More to follow. Views in the meantime?