Improving my vision: Some views on Microsoft’s Open Specification Promise

Ambrose Bierce, in The Devil’s Dictionary, defined a cynic as follows:

A blackguard whose faulty vision sees things as they are, not as they ought to be. Hence the custom among the Scythians of plucking out a cynic’s eyes to improve his vision.

Many years later, Albert Einstein defined common sense as “the collection of prejudices acquired by age eighteen”.

As I grow older, I realise that however hard I try to keep an open mind, and to learn, I land up with anchors and frames and perspective-biases that I don’t always know I have. Which means that sometimes I have to work hard to ensure that I don’t lapse insidiously into cynicism.

So you can understand why I had to work very hard indeed when analysing the Microsoft Open Specification Promise that was published yesterday. If you’re interested in the subject, then please do check out Kim Cameron’s blog here, Doc’s piece at IT Garage (where he asks for your opinion as well) and Phil Windley’s blog here, along with Becker and Norlin’s Digital ID World blog at ZDNet.

Microsoft are not known for their pioneering approaches in the opensource world. Identity is one of the three big issues that affects our ability to deliver the promise of today’s technology (the other two are Intellectual Property/Digital Rights and the “internet”, with or without Stevens’ Tubes). A valid solution for identity pretty much needs Microsoft’s support and that of its legions of lawyers.
And so we come to the Open Specification Promise. My early reactions? I think Kim Cameron and his team have done a brilliant job at pulling this off and getting something workable past the lawyers’ scrutiny.

If you want to understand it, and don’t particularly feel like wading through “implication, exhaustion, estoppel or otherwise” (and who could blame you?), then skip the legalese and go straight to the Frequently Asked Questions section. I quote from the FAQs:

  • The Open Specification Promise is a simple and clear way to assure that the broadest audience of developers and customers working with commercial or open source software can implement specifications through a simplified method of sharing of technical assets, while recognizing the legitimacy of intellectual property.
  • We listened to feedback from community representatives who made positive comments regarding the acceptability of this approach.
  • Q: Why did Microsoft take this approach?
  • A: It was a simple, clear way, after looking at many different licensing approaches, to reassure a broad audience of developers and customers that the specification(s) could be used for free, easily, now and forever.
  • Q: How does the Open Specification Promise work? Do I have to do anything in order to get the benefit of this OSP?
  • A: No one needs to sign anything or even reference anything. Anyone is free to implement the specification(s), as they wish and do not need to make any mention of or reference to Microsoft. Anyone can use or implement these specification(s) with their technology, code, solution, etc. You must agree to the terms in order to benefit from the promise; however, you do not need to sign a license agreement, or otherwise communicate your agreement to Microsoft.
  • Q: What is covered and what is not covered by the Open Specification Promise?
  • A: The OSP covers each individual specification designated on the public list posted at http://www.microsoft.com/interop/osp/. The OSP applies to anyone who is building software and or hardware to implement one or more of those specification(s). You can choose to implement all or part of the specification(s). The OSP does not apply to any work that you do beyond the scope of the covered specification(s).

We have a long way to go before we can solve all this. We’re not going to solve all this unless we stop acting like cynics. So let’s get behind Kim Cameron on this and see what happens. That’s what I’m going to do.
An aside: Why can’t legal agreements be written like FAQ sections? Is there a law against it? 

Learning about social software

One thing I have found to be consistently true for social software is the immense value of experimenting with every form of it. You don’t know what you can do with “it”, (whatever “it” is) until you try.

I remember being told when I was eight years old that the ancient Greeks had major arguments about aspects of gravity; the arguments centred around a two-stone model, one big and one small. They assumed that the big stone would fall faster than the small one, taking the feather analogy to its extreme. But after that, they were lost. One school suggested that the resultant “stone” was bigger and would fall faster than the big stone. The other said that the small stone would slow down the big stone and therefore the resultant “stone” would be slowed down in comparison to the big stone in isolation.

The detail doesn’t matter. What matters is that they never tried it. Just talked about it.
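(As it happens, the experiment the Greeks never ran is now a two-minute exercise. A minimal sketch, assuming uniform gravity and neglecting air resistance: the fall time depends only on the height, not the mass, which dissolves both schools’ paradox at once.)

```python
import math

def fall_time(height_m: float, g: float = 9.81) -> float:
    """Time to fall height_m under uniform gravity, no air resistance.

    From h = (1/2) * g * t**2, solving for t gives t = sqrt(2h / g).
    Note the mass does not appear anywhere in the formula.
    """
    return math.sqrt(2 * height_m / g)

# Small stone, big stone, or the two tied together: all land together.
h = 20.0
print(f"Any stone dropped from {h} m lands after {fall_time(h):.2f} s")
```

The mass cancels out of the equation of motion, so the small stone neither slows the big one down nor speeds it up.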

And it is with this in mind that I recommend you take a look at BizPredict. Thanks to Erick Schonfeld of Business 2.0 for letting me know.

Whether it’s blogs or wikis or social networks or prediction markets or better tags or identity or intention or whatever, we all need to figure out what happens by playing with it. What governance models work. What privacy issues emerge. What unusual uses humankind finds for all this. What the ecosystems look like, how they evolve.

More on social software and consensus

A few days ago I wrote about David Freedman’s piece in Inc. magazine, where he, in Carr-like fashion, suggested that collaboration doesn’t work, that crowds don’t have wisdom, that workgroups fail most often when they’re faced with making a decision. I took some issue with the statements.
I then suggested a number of false or weak forms of consensus, seeking to make the point that real consensus requires trust and commitment, and showing how social software could help us achieve this.

I realise I missed out an important evil form of consensus. Silent and tacit consensus. The Elephant In The Room Without Any Clothes.

And, in a typically serendipitous bloglike way, where do I get the kernel for this post? The same David Freedman. This time, writing in the latest issue of Newsweek on new directions in cancer research.

I quote from the article:

  • Vogelstein notes that cells with genetic scrambling can already be picked up in the blood of cancer patients, which suggests that catching cancer early may end up a matter of a routine blood test. That in itself is a hurdle for researchers, though. “Early diagnosis is undervalued in the research community, because prevention isn’t as dramatic as curing,” says Vogelstein. “Pharmaceutical companies are more interested in treatment, because they make drugs, and they account for a large part of the cancer-research budget.” And so much time, money and expectation have been staked on the oncogene approach that abandoning it would be a demoralizing admission of defeat and, in many cases, a career sinker. “The way science works is, when you end up backing a theory you can’t afford to be wrong or your grant will suffer,” says UCLA researcher Jeffrey H. Miller.
  • Many scientists and funding administrators often simply choose to ignore a promising avenue of research until pressured to do so; careers are more easily advanced by sticking with accepted paths even when they may be wrong. That places the ball squarely in the public’s court, says Benjamin Djulbegovic, a researcher at the University of South Florida who studies clinical trials of new cancer therapies. “There’s dissonance between what researchers study and what patients need,” he says. “When there are competing research agendas, there needs to be public discourse on who should control those agendas.”

I’m not really picking on Freedman; he’s just reporting what researchers and scientists have told him.

For cancer research read complex project. How many times have you seen, or even participated in, a project that was hopelessly wrong from the start, or where fundamentally better options emerged midstream? How many times have you seen teams continue down such blind alleys because they genuinely believe that any other route represents the end of their careers?

Just look at what is being said:

  • Careers are more easily advanced by sticking with accepted paths even when they may be wrong.
  • There’s dissonance between what researchers study and what patients need.

Here’s another place where social software can help in enterprises and even across enterprises. A better connection between customer and designer, patient and researcher. More transparency in the status of projects and programmes, real status reports rather than political Office mashups. A genuine ability to put your hand up and say “but daddy, he’s got no clothes on”.

People who raise their heads above the parapets tend to get shot. This I realise and understand. We already have a number of cases of blogger bashing. But I can’t help feeling that this is changing, and that the change is being brought into existence by the openness and transparency that social software affords us.

Soon, an enterprise that reacts unwisely to truths emanating from its internal and external social software implementations will pay a heavy market price for its actions. Values count; actions that define values count even more.

A fable about DRM

The kernel for this post is a story on Amazon’s Unbox by David Berlind on ZDNet. As he calls it, more C.R.A.P. [Thanks, David. And my regards to Dan.]
Read it and weep, because we’re all due to get so much more of the same. And guess what? We’ve been here before big time. Bear with me and you will see.

People get very emotional when it comes to IPR and DRM. Everyone’s up in arms: the “content creators”, the “content funders”, the “content publishers”, the “content carriers”, the “content-receiving-device manufacturers”, the “content-receiving-device’s-operating-system-creator”, the list is endless…. I haven’t even got any space for Stevens’ Tubes.
So it’s all about “content”. Apparently. How I detest that word.

Let me try and talk about DRM in a completely “content-free” context. Note: Everything that follows is just an attempt to place the issue of DRM in a different perspective, to frame the argument differently. So please don’t bother critiquing the stuff on historical authenticity, I am not claiming this to be some deeply erudite history of computing over the last three or four decades. Just an attempt to frame DRM issues in an unemotional context.

Enterprises.

Let’s take a look at enterprise architectures.

Thirty years ago, enterprise architectures were simple. We had IBM. And we had the BUNCH. [Burroughs, Univac, NCR, Control Data and Honeywell]. The Bunch weren’t Wild, they were for the most part also-rans in IBM’s shadow. And every enterprise chose one vendor. It does not matter how that choice was made, the point is there was only one.

They all rallied round one flag. Shift tin. The money was in the tin. They gave away the software and the services.

Life was good. -Ish, anyway. You didn’t have to worry about enterprise application integration. It was the vendor’s problem.

Not everyone needed a mainframe, or could afford a mainframe. So people time-shared. Or did without.
Then the “dirty guys” [see Aside] wandered in. The midrange brigade. Digital and Data General et al. They built minicomputers, and the firms that did without now had an option. They could buy minicomputers or lease them or go without. So they did, they bought the stuff in droves. Exciting times. The Soul of A New Machine. [Aside: When IBM entered the midrange marketplace, I seem to remember a wonderful ad that took a headline from one of the broadsheets, possibly the Wall Street Journal, saying “IBM to clean up dirty end of market”. This banner headline dominated the top of the ad, looking like it was crudely torn out of the paper. Then there was a lot of white space. And in smaller letters at the bottom were the words “The bastards say ‘Welcome’ “. I think it was Data General.]

Life remained good. Ish. You still didn’t have to worry about enterprise application integration. Most enterprises remained resolutely single-vendor, at least partly because software and services were virtually free.

Unfortunately for the vendors, a few things started happening. Moore’s Law had taken hold, Metcalfe’s Law was getting into gear, and both AT&T as well as IBM were getting into antitrust trouble. With their attention occupied elsewhere, AT&T went and gave away Unix. And IBM gave away Microsoft. After all, software and services were nothing.
But Moore’s Law marched on, Metcalfe did his bit, the PC revolution was in full swing, and to cap it all there were more versions of “unix” about than the population of China. Calling themselves “open systems”.

Now life got complicated. Everyone wanted in. There were “program package” companies, database companies, systems integrators, network specialists, everyone.

And they got everywhere. Enterprises weren’t single-vendor havens of peace any more. Hybrid architectures blossomed everywhere, made worse by a glut of mergers and takeovers and diversification strategies. And making things work wasn’t easy any more.

So everyone started charging for what used to be free. A very painful period. And a new industry was born. Enterprise Application Integration.

Crudely put, EAI was the price you paid for getting to the stuff you had already paid for, because everyone had made sure that you couldn’t. But they were boom years and the enterprises paid up. Sometimes grudgingly, but they paid up.

And life was good. For the vendors and integrators, that is. Not for the enterprises.

People realised that this was a mess, and that there was a need for open standards to make things easier. So standards bodies popped up everywhere. And were immediately taken over by the only people who had money. The vendors and integrators. So standardisation didn’t happen. And the enterprises quietly cried in their sleep. And kept paying up.
Moore and Metcalfe marched on. Bloatware took up the slack. So did EAI. And a bunch of consultants riding that gravy train to hell, reengineering everything. If it moved, reengineer it. If it didn’t, reengineer it anyway. And the enterprises continued to wail and gnash their teeth. Some didn’t make it. The rest paid up.

Time for a few more new industries. One that focused on telling people there was no longer any business value in IT. Which was true for the enterprise, but definitely not true for the vendors and consultants. One that focused on wage arbitrage. And of course good ol’ Linux.

Somewhere in between, the World Wide Web [an aside: is www the only known case of an abbreviation with three times as many syllables as its long form?] came in, and set the scene for another whole new industry.

But let me stop there for the purposes of this fable.

Enterprises spent, and continue to spend, an enormous amount of money trying to integrate applications, trying to get to the data they “own”, their “content”, and trying to do things with that data. And DRM 1.0, the proprietary nature of all the stacks, made this happen. Many people made money from this, but not the customer. The enterprise. And many enterprises went to the wall as a result of this shambles.

People did push back, but it’s taken a very long time for us to get anywhere close to an open-standards, open-platforms, opensource software ecosystem. And we’re not there yet, not by a long chalk.

Now, as telephony becomes software, as the internet joins Moore and Metcalfe and Gilder, we have DRM 2.0 coming our way.

But guess what? This time the enterprise is not the customer.

The individual is the customer.

Individuals, in comparison to enterprises, have a far lower getting-conned threshold.

What DRM 2.0 seeks to do is to recreate the walled gardens, the vendor lock-ins, the wonderful annuities that EAI, or DRM 1.0 provided. Annuities that destroyed value for all bar the vendors and consultants the first time around.

So imagine EAI is IAI, Individual Application Integration. Or leave it as EAI, Entertainment Application Integration.

Welcome to DRM 2.0.

My gut feel is that my own generation, the ones who paid through their noses for EAI/DRM first time around, the ones that were constantly told that IT has no business value, we’re not going to do anything about it. We’re so used to being shafted that we are in “Take a Number” mode.

And we make the enterprise decisions today, so we will probably implement EAI/DRM 2.0 and go through the nightmare again. Stockholmers.

But not Generation M. They can see the stupidity [I hope and pray they do]. So I watch them with interest, wondering whether they will be able to do what we failed to do. Because they can.

An addendum: How will enterprises implement EAI/DRM 2.0? By doing the wrong thing on identity, on permissioning, on authentication. By doing the wrong thing on security. By doing the wrong thing on platform independence. By doing the wrong thing on Internet Protocol. By doing the wrong thing. Grandma, what sharp teeth you have.

And that’s why I spend time thinking about IPR and DRM and Identity in an enterprise context. Because it’s easy to be wrong. Sure there are good vendors out there, good consultants out there, good software providers, good telcos, good device manufacturers. But they are few and far between.
Every fable should have a moral at the end of it, I guess.

The moral of this fable is that with DRM 1.0, the content-creators, the enterprises, were the primary losers. The vendors and consultants and intermediaries all said “this is good for the enterprise”. It was good for them. But not for the enterprise. Hmmm.

On social software and consensus

Have you read David Freedman’s recent piece in Inc. Magazine’s September 2006 edition? You should. It’s been doing the rounds in some of the microconversations I participate in, and I’m glad it came across my radar screen.
Freedman headlines his article What’s Next: The Idiocy of Crowds. OK, that got my attention.

He then starts the article with:

Collaboration is the hottest buzzword in business today. Too bad it doesn’t work.

Now I wasn’t just attentive. I was committed. Very intrigued. Very very intrigued.

Here are some more quotes to intrigue you:

  • As James Surowiecki nicely puts it in the title of his best-selling book, it’s “the wisdom of crowds,” and it’s a glorious thing. Or it would be, if it weren’t for just one little problem: The effectiveness of groups, teamwork, collaboration, and consensus is largely a myth. In many cases, individuals do much better on their own. Our bias toward groups is counterproductive. And the technology of ubiquitous connectedness is making the problem worse.
  • What he glosses over, however, is the often spectacular way groups fail in the context of organizations.
  • Things only get worse when a team is charged with actually making a decision.
  • What’s more, these electronic group decisions can be even more brain-dead than in-person meetings. The biggest problem: the fear of dissenting is magnified in a Web, e-mail, or instant messaging exchange, because participants know their comments can be saved and widely distributed. Instead of briefly offending six people at a meeting, you have the chance to enrage thousands.
  • According to a recent article in The Guardian, every three seconds a Wikipedia page is rendered inaccurate–or more inaccurate than it was to begin with–by a hoaxer, ignoramus, or malcontent.

By the time I finished reading through the article, I wasn’t intrigued any more. And not incensed either. More like apathetic. So why do I post this? The usual reason. This kind of thinking will gain currency, especially when amplified by traditional media, unless people like us push back.

Freedman manages to take a swipe at collaboration and teamwork, at social software, at democratised innovation, the whole nine yards. Where he does give praise, it has the feeling of being faint and damning.

So here’s my take on Freedman’s piece:

1. Deciding isn’t doing

There was a book written many years ago called Five Frogs on a Log which first introduced me to this analogy. Five frogs, sitting on a log. Four decide to jump off. How many are left? Five, because deciding isn’t doing :-)

We should not confuse group decision making with groups working. Collaboration and consensus are two entirely different things.

2. Consensus comes in many forms

  • Simple Voting, with an agreement that the vote is binding on the group, no chads anywhere
  • Weighted Voting, same as Simple except that the boss is always right
  • Soft consensus, agreement amongst all those who cared or bothered to turn up
  • Filibuster consensus, agreement from all present in response to being bored out of their wits
  • Real consensus, where group members are given the opportunity to voice their concerns and misgivings in private, prior to a group decision that all will support in public

3. Real consensus is made far more possible by the use of social software

Patrick Lencioni, in his “fable” The Five Dysfunctions of a Team, makes the point far better than I could; all I am doing is placing the argument in the context of social software:

  • Unless there is trust within a team people are unwilling to be open
  • Unless people are open they won’t express their doubts and misgivings
  • Unless they express their doubts and misgivings they won’t feel they were party to the decision
  • Unless they feel party to the decision they won’t commit to it
  • Unless they commit to it execution is all but impossible

Read the original; my summary doesn’t do it justice. Lencioni’s arguments are very useful in the context of social software. Here’s an offbeat reason why:

In the past, firms had relatively low attrition. Job mobility was low, and gold watches and carriage clocks abounded. In such circumstances, consensus really worked. People voiced their misgivings openly, then went with the majority view after debate. Why? Because they knew that their peers and bosses would remember their sacrifice and their commitment to the team. Institutional memory.

Now, as we live with far higher attrition and job mobility, the institution has no memory any more. Sacrifices are not worth anything, people move, bosses move, there’s an I’m All Right Jack Every Man For Himself culture. Consensus is therefore hard to achieve, and infinite loops of decision-making and second-guessing and third-guessing abound.

In addition to the institutional memory problem (which is primarily about sacrifice) there is also a time-and-space problem. The infinite loop mentioned above is now able to move to new levels of infinity a la Cantor. People are often travelling and in different timezones.

Social software can be used to solve all this. It captures the context, helping absentees “get up to speed”. It captures the conversation, allowing concerns and issues to be aired and recorded. Institutional memory is therefore preserved regardless of attrition. It solves the space and time problems, along with more modern “unproductivity tool” problems such as Which Version Are You Looking At? and I Can’t See That On My BlackBerry and “Sorry, my signal is fading and I have to sign off”.

So Mr Freedman:

  • Deciding isn’t doing.
  • Consensus comes in many forms.
  • Social software aids consensus.