Mea culpa.
Usually I read The Edge. Like First Monday, it is one more place on the web I go to find things that challenge me, that teach me, that stop me from seeping slowly into inertia.
But for some reason, probably sheer forgetfulness, I subscribed to First Monday from the get-go, but not to The Edge.
So it took Steven Levy and Newsweek, via an article headlined Mao’s Revenge (for some reason the online version is headlined Poking a Stick Into the Hive Mind), to alert me to Jaron Lanier’s essay last May on Digital Maoism. I find it hard to figure out why I didn’t see anything about it anywhere else, given the importance of the debate and the nature of the participants in that debate. So mea culpa again.
Two asides. One, I couldn’t resist linking Lanier’s name to his Wikipedia entry; there is an imp in every one of us :-). And two, what I am doing is itself a departure from the Information is Power mindset of old. I am meant to keep these sources to myself and then appear wise and learned while regurgitating stuff from my secret sources. That’s what the old model was about. Hidden sources. Privileged access. Exclusions. A sham wisdom. In the blogosphere we open-source not just our ideas but also all our sources. Because we don’t need to rely on such trickery.
I need time to read through all that has been discussed, by enough luminaries to fill an Ivy League faculty and more. There’s a lot of useful stuff in there, stuff I believe some of you will enjoy digging through. So dig away.
If I haven’t finished reading it, why am I breaking from my norm and just linking to the stuff? Because it’s now in the mainstream, as a result of the Newsweek coverage, and we need to act. Collectively :-) Before the mainstream accept his view as the norm. Because they will. I can see reprints being ordered now and becoming part of every enterprise pantheon on social software. Unless we respond.
I like a lot of what Lanier usually says. But this time he brings his guns to bear on social knowledge and citizen media, wielding all the traditional criticisms: lowest common denominator; dumbing down; propaganda; hive mentalities and drone thinking; the whole shooting match. And, sadly, he agrees with the critics. Thankfully, he too has critics, and The Edge does a good job of putting their points of view across.
Please do read it if you’re at all interested in the subject. I promise to comment in detail sometime soon, for whatever it’s worth.
Let me end by saying that any medium of expression has the capacity to be subverted into a propaganda machine. The internet is not a medium of expression; it is a place. A marketplace. Of conversations. And so it has a capacity for dissent that is unrivalled.
That’s why crowds can be wise. It is in the capacity for dissent, and the free exercising of that capacity, that collective wisdom is formed.
“That’s why crowds can be wise.” The feature of crowds that diminishes dissent, especially reasoned dissent, in favor of energy, of will and power, is also why crowds can be deeply unwise.
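Here is a toy sketch, in Python, of the statistical point behind that capacity for dissent; the distributions and numbers are invented purely for illustration. Independent estimates average out each person’s error; herded estimates share a bias that averaging cannot remove.

```python
import random
import statistics

# Toy sketch: why the capacity for dissent matters statistically.
# Each of N people estimates a true value. If their errors are independent
# (free dissent), the crowd average is far more accurate than any one person.
# If a shared bias dominates (dissent suppressed), averaging removes nothing.

random.seed(1)
TRUE_VALUE = 100.0
N = 1000

# Independent crowd: each estimate is the truth plus personal noise.
independent = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]

# Herding crowd: one shared bias plus a little personal noise.
shared_bias = random.gauss(0, 20)
herding = [TRUE_VALUE + shared_bias + random.gauss(0, 5) for _ in range(N)]

for label, crowd in (("independent", independent), ("herding", herding)):
    avg = statistics.mean(crowd)
    print(f"{label:>11}: average = {avg:6.1f}, error = {abs(avg - TRUE_VALUE):4.1f}")
```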
It’s not an either/or question of old vs. new mindset, nor even a dialectic of the two in which one eventually triumphs because it has to; instead, the rise of the conversational market signals that a paradox as old as human society is at work again. We have so many options if we open our eyes to them. There’s no need to assume one way is the right way.
How do we get the best of every system without making all systems other than our favored one the Enemy? I believe the answer is not to be arbitrary about any system’s value, and to question everything, though Jaron’s essay is as good a place to start as any.
Hi Mitch, good to see you here. I agree with you, but tend to feel that the traditional systems are too well established for social software to be given a fair chance, which means we have to defend its right to exist and show what it can do.
Jaron’s post makes some very good points, but runs the risk of strangling other options at birth.
So I press on.
The best thing about a wiki is that it provides an interface capable of accumulating vast amounts of information on a virtually unlimited number of topics. The problem is that because anybody can edit a wiki, there are some people who will and can create malicious or funny edits. Of course there are also those who think they know everything about something, but really don’t have a clue.
Wikipedia should be the ultimate information source. One day it will have to be monitored from time to time to keep things from getting too far out of hand.
There’s a piece that’s missing here. The irrational behaviour of humans. That can be for the good, but I’ve yet to see examples with which I’d be comfortable in the business world. It’s about being shaken out of belief systems. A very difficult thing to achieve.
I think you hit on a key point here, Dennis.
Traditional systems of communication and management are biased towards past-predicts-the-future. Irrational behaviour does happen, but magically disappears in the cold light of spreadsheet and presentation.
Many of these tools seek to use intent and opinion as a present-and-future-predicts-the-future model.
That shift is momentous and causes anguish.
Whenever something is in the past, it becomes fact rather than unreason. So we con ourselves.
Prediction markets are a classic new form. But companies find them hard to accept, because in their eyes they are an aggregation of irrationality. More later.
From Wikipedia:Verifiability, an official policy on the English Wikipedia (last modified 7 September 2006):
The threshold for inclusion in Wikipedia is verifiability, not truth. “Verifiable” in this context means that any reader must be able to check that material added to Wikipedia has already been published by a reliable source, because Wikipedia does not publish original thought or original research…
“Verifiability” in this context does not mean that editors are expected to verify whether, for example, the contents of a New York Times article are true. In fact, editors are strongly discouraged from conducting this kind of research, because original research may not be published in Wikipedia. Articles should contain only material that has been published by reliable sources, regardless of whether individual editors view that material as true or false. The threshold for inclusion in Wikipedia is thus verifiability, not truth.
A good way to look at the distinction between verifiability and truth is with the following example. Suppose you are writing a Wikipedia entry on a famous physicist’s Theory X, which has been published in peer-reviewed journals and is therefore an appropriate subject for a Wikipedia article. However, in the course of writing the article, you contact the physicist and he tells you: “Actually, I now believe Theory X to be completely false.” Even though you have this from the author himself, you cannot include the fact that he said it in your Wikipedia entry.
Why not? Because it is not verifiable in a way that would satisfy the Wikipedia readership or other editors. The readers don’t know who you are. You can’t include your telephone number so that every reader in the world can call you for confirmation. And even if they could, why should they believe you?
For the information to be acceptable to Wikipedia you would have to persuade a reputable news organization to publish your story first, which would then go through a process similar to peer review. It would be checked by a reporter, an editor, perhaps by a fact-checker, and if the story were problematic, it might be checked further by the lawyers and the editor-in-chief. These checks and balances exist to ensure that accurate and fair stories appear in the newspaper.
It is this fact-checking process that Wikipedia is not in a position to provide, which is why the no original research and verifiability policies are so important. If the newspaper published the story, you could then include the information in your Wikipedia entry, citing the newspaper article as your source.