One thing I have found to be consistently true for social software is the immense value of experimenting with every form of it. You don’t know what you can do with “it” (whatever “it” is) until you try.
I remember being told when I was eight years old that the ancient Greeks had major arguments about aspects of gravity; the arguments centred on a two-stone model, one big and one small. They assumed that the big stone would fall faster than the small one, taking the feather analogy to its extreme. But when they asked what would happen if the two stones were tied together and dropped, they were lost. One school suggested that the resultant “stone” was bigger and would therefore fall faster than the big stone alone. The other said that the small stone would slow the big one down, so the resultant “stone” would fall more slowly than the big stone in isolation.
The detail doesn’t matter. What matters is that they never tried it. Just talked about it.
And it is with this in mind that I recommend you take a look at BizPredict. Thanks to Erick Schonfeld of Business 2.0 for letting me know.
Whether it’s blogs or wikis or social networks or prediction markets or better tags or identity or intention or whatever, we all need to figure out what happens by playing with it. What governance models work. What privacy issues emerge. What unusual uses humankind finds for all this. What the ecosystems look like, how they evolve.
The best parable about experimenting comes from an old B.C. comic strip (views of modern life in a caveman setting). Peter, the “intellectual” in the cast of characters, is dragging a forked stick on the ground. He announces to everyone, “I shall now prove to you idiots that parallel lines never meet.” He shows the parallel lines drawn by the stick and continues, “I shall walk around the earth; and, when I return, you will see that the lines are still apart. … See you later.” So he walks around the earth; and, as he walks, (you guessed it) the ends of the stick wear down from the friction. When he returns, the stick has worn down so much that the two forks are gone. So there Peter stands, in front of the same crowd, with his stick now drawing a single line, looking very sheepish!
Before the publication of “guides” for “dummies” and “complete idiots” became an industry, there was this really great book called HOW TO LIE WITH STATISTICS. It was one of the best introductions to what statistics could tell you and, more importantly, what they could NOT tell you. The B.C. parable really needs to be expanded into a book called HOW TO LIE WITH EXPERIMENTS (or, perhaps even better, HOW TO LIE WITH SCIENTIFIC METHOD). There is certainly a whole book’s worth of content on improper experimentation, beginning with design and going all the way to interpretation of results.
The purpose of this cautionary rant is to draw a distinction between EXPERIMENT and EXPERIENCE. Having had to supervise the management of a mini-conference on persistent conversation, I know that there are some good minds out there who know about designing experiments for social software, but I am not sure how much all that experimentation has done beyond advancing individual publication records.
So what is the other side of the coin? I would say it is based on anecdotal accounts of experiences. This has assets and liabilities. The most important liability is that one cannot generalize from a single instance, but the fact that the anecdote is grounded in a single instance gives it a concreteness that is lacking in most experimental results. Also, the narrative of the account, like any narrative, can be interpreted in many ways. Is that an asset or a liability? I do not believe there is a hard-and-fast answer to that question. Back in 1984 Eric Eisenberg wrote an excellent paper about “strategic ambiguity,” explaining the strategic assets of different people coming away from an encounter with different interpretations. I think this applies to how we interpret narrative accounts of experiences, particularly with new technologies.
My punch line, then, is that you will probably learn more from experiential accounts than you will from experimental results; but do not assume that what you learn is “truth.” All we can ever do is gather evidence. How we use that evidence will always be highly situated, and our situations will always be changing. As long as we are comfortable living in a world of “evidence without conclusions,” all will be well!
But after the experiments were done, the physics started. We don’t yet know what really works in social software; the models tried in its first incarnation, groupware, failed. But that does not mean there are no rules, that we can only gather the outcomes of individual experiments and never have any theory. There is a theory, and we should strive to discover it. We need to play with our models, with our assumptions about what really plays a role and what we can throw out of the model to simplify it.