Musing about perfect markets, perfect information and rational behaviour

I read Economics at university. Many years ago. And my father used to tell me that the most dangerous phrase he’d ever heard an economist use was “Let us assume that…”.

So when I studied perfect markets and perfect information and rational behaviour, I understood the assumptions and understood that the assumptions were wrong. But it didn’t matter, or so I thought, since all we were doing was building theoretical models.

When it came to “perfect information”, I was naive enough to believe that the only constraints to perfect information existed in the technologies used to transport that information. It was only as I began to understand how organisations worked that I realised just how naive I was.

But there were parts of me that still believed. And so as Moore and Metcalfe and Gilder marched on, and the web became reality, I could see a way of using social software to inch towards perfect markets in some very specific niches.

Two niches in particular.

I wanted to be able to build a list of requirements using a wiki, and I wanted to be able to go through the search, price discovery, and fulfilment stages of purchasing something that meets those requirements via a blog.

Which brings me to this story in the Telegraph blog, pointed out to me by Ross and Alexis. Thanks, guys.

When we look at the problems of requirements capture, and their consequent impact on project costs and delivery, we need to find ways to improve the process. We understand time-boxing and time-placing; we understand scope creep and requirements creep; we understand extreme programming, pair programming, fast iteration. So why can’t we see that we can capture, share, iterate and evolve requirements much more effectively using wikis? I’m confused.

There ought to be a law that says “Information tends to go corrupt when hidden, and tends to corrupt those who participate in the process of hiding the information.”

We waste so much in the procurement process for the same reasons. We don’t use the tools we have to discover what’s out there. We don’t make the process a participative one. We make it worse by allowing the tenderers better access to the requirements than anyone else. I’m confused.

As with Wikipedia and the celebrity blogs, there will always be vandals: some in the interests of art, some in the interests of “freedom”, some for the heck of it.

But you don’t shut down record stores because Banksy makes a statement about Paris Hilton.

You don’t shut down museums because Marcel Duchamp puts a moustache on a copy of La Gioconda.

So why do we do this? Why do we have so much fear of perfect information? So much so that we blame the tools, the people, everything.

Improving my vision: Some views on Microsoft’s Open Specification Promise

Ambrose Bierce, in The Devil’s Dictionary, defined a cynic as follows:

A blackguard whose faulty vision sees things as they are, not as they ought to be. Hence the custom among the Scythians of plucking out a cynic’s eyes to improve his vision.

Many years later, Albert Einstein defined common sense as “the collection of prejudices acquired by age eighteen”.

As I grow older, I realise that however hard I try to keep an open mind, and to learn, I land up with anchors and frames and perspective-biases that I don’t always know I have. Which means that sometimes I have to work hard to ensure that I don’t lapse insidiously into cynicism.

So you can understand why I had to work very hard indeed when analysing the Microsoft Open Specification Promise that was published yesterday. If you’re interested in the subject, then please do check out Kim Cameron’s blog here, Doc’s piece at IT Garage (where he asks for your opinion as well) and Phil Windley’s blog here, along with Becker and Norlin’s Digital ID World blog at ZDNet.

Microsoft are not known for their pioneering approaches in the opensource world. Identity is one of the three big issues that affects our ability to deliver the promise of today’s technology (the other two are Intellectual Property/Digital Rights and the “internet”, with or without Stevens’ Tubes). A valid solution for identity pretty much needs Microsoft’s support and that of its legions of lawyers.
And so we come to the Open Specification Promise. My early reactions? I think Kim Cameron and his team have done a brilliant job of pulling this off and getting something workable past the lawyers’ scrutiny.

If you want to understand it, and don’t particularly feel like wading through “implication, exhaustion, estoppel or otherwise” (and who could blame you?), then skip the legalese and go straight to the Frequently Asked Questions section. I quote from the FAQs:

  • The Open Specification Promise is a simple and clear way to assure that the broadest audience of developers and customers working with commercial or open source software can implement specifications through a simplified method of sharing of technical assets, while recognizing the legitimacy of intellectual property.
  • We listened to feedback from community representatives who made positive comments regarding the acceptability of this approach.
  • Q: Why did Microsoft take this approach?
  • A: It was a simple, clear way, after looking at many different licensing approaches, to reassure a broad audience of developers and customers that the specification(s) could be used for free, easily, now and forever.
  • Q: How does the Open Specification Promise work? Do I have to do anything in order to get the benefit of this OSP?
  • A: No one needs to sign anything or even reference anything. Anyone is free to implement the specification(s), as they wish and do not need to make any mention of or reference to Microsoft. Anyone can use or implement these specification(s) with their technology, code, solution, etc. You must agree to the terms in order to benefit from the promise; however, you do not need to sign a license agreement, or otherwise communicate your agreement to Microsoft.
  • Q: What is covered and what is not covered by the Open Specification Promise?
  • A: The OSP covers each individual specification designated on the public list posted at http://www.microsoft.com/interop/osp/. The OSP applies to anyone who is building software and or hardware to implement one or more of those specification(s). You can choose to implement all or part of the specification(s). The OSP does not apply to any work that you do beyond the scope of the covered specification(s).

We have a long way to go before we can solve all this. We’re not going to solve all this unless we stop acting like cynics. So let’s get behind Kim Cameron on this and see what happens. That’s what I’m going to do.
An aside: Why can’t legal agreements be written like FAQ sections? Is there a law against it? 

Learning about social software

One thing I have found to be consistently true for social software is the immense value of experimenting with every form of it. You don’t know what you can do with “it” (whatever “it” is) until you try.

I remember being told when I was eight years old that the ancient Greeks had major arguments about aspects of gravity; the arguments centred on a two-stone model, one stone big and one small. They assumed that the big stone would fall faster than the small one, taking the feather analogy to its extreme. But then consider tying the two stones together and dropping them, and they were lost. One school suggested that the resultant “stone” was bigger and would therefore fall faster than the big stone alone. The other said that the small stone would slow down the big stone, and therefore the resultant “stone” would fall more slowly than the big stone in isolation.

The detail doesn’t matter. What matters is that they never tried it. Just talked about it.
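The experiment the Greeks never ran is trivial to sketch today. In idealised free fall (no air resistance), mass cancels out of Newton’s second law, so fall time depends only on height and gravity. A toy illustration, with arbitrary numbers of my own choosing:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(mass_kg: float, height_m: float) -> float:
    """Time for an object to fall height_m in a vacuum.

    Mass cancels out of F = m*g (a = F/m = g), so it never
    appears in the answer: t = sqrt(2h / g).
    """
    return math.sqrt(2 * height_m / G)

small_stone = fall_time(mass_kg=1.0, height_m=10.0)
big_stone = fall_time(mass_kg=100.0, height_m=10.0)
assert small_stone == big_stone  # both Greek schools were wrong: equal times
```

The point stands either way: an afternoon of trying beats a century of arguing.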

And it is with this in mind that I recommend you take a look at BizPredict. Thanks to Erick Schonfeld of Business 2.0 for letting me know.

Whether it’s blogs or wikis or social networks or prediction markets or better tags or identity or intention or whatever, we all need to figure out what happens by playing with it. What governance models work. What privacy issues emerge. What unusual uses humankind finds for all this. What the ecosystems look like, how they evolve.

More on social software and consensus

A few days ago I wrote about David Freedman’s piece in Inc magazine, where he, in Carr-like fashion, suggested that collaboration doesn’t work, that crowds don’t have wisdom, that workgroups fail most often when they’re faced with making a decision. I took some issue with the statements.
I then suggested a number of false or weak forms of consensus, seeking to make the point that real consensus requires trust and commitment, and showing how social software could help us achieve this.

I realise I missed out an important evil form of consensus. Silent and tacit consensus. The Elephant In The Room Without Any Clothes.

And, in a typically serendipitous bloglike way, where do I get the kernel for this post? The same David Freedman. This time, writing in the latest issue of Newsweek on new directions in cancer research.

I quote from the article:

  • Vogelstein notes that cells with genetic scrambling can already be picked up in the blood of cancer patients, which suggests that catching cancer early may end up a matter of a routine blood test. That in itself is a hurdle for researchers, though. “Early diagnosis is undervalued in the research community, because prevention isn’t as dramatic as curing,” says Vogelstein. “Pharmaceutical companies are more interested in treatment, because they make drugs, and they account for a large part of the cancer-research budget.” And so much time, money and expectation have been staked on the oncogene approach that abandoning it would be a demoralizing admission of defeat and, in many cases, a career sinker. “The way science works is, when you end up backing a theory you can’t afford to be wrong or your grant will suffer,” says UCLA researcher Jeffrey H. Miller.
  • Many scientists and funding administrators often simply choose to ignore a promising avenue of research until pressured to do so; careers are more easily advanced by sticking with accepted paths even when they may be wrong. That places the ball squarely in the public’s court, says Benjamin Djulbegovic, a researcher at the University of South Florida who studies clinical trials of new cancer therapies. “There’s dissonance between what researchers study and what patients need,” he says. “When there are competing research agendas, there needs to be public discourse on who should control those agendas.”

I’m not really picking on Freedman; he’s just reporting what researchers and scientists have told him.

For cancer research read complex project. How many times have you seen, or even participated in, a project that was hopelessly wrong from the start, or where fundamentally better options emerged midstream? How many times have you seen teams continue down such blind alleys because they genuinely believe that any other route represents the end of their careers?

Just look at what is being said:

  • Careers are more easily advanced by sticking with accepted paths even when they may be wrong.
  • There’s dissonance between what researchers study and what patients need.

Here’s another place where social software can help in enterprises, and even across enterprises. A better connection between customer and designer, between patient and researcher. More transparency in the status of projects and programmes, real status reports rather than political Office mashups. A genuine ability to put your hand up and say “but daddy, he’s got no clothes on”.

People who raise their heads above the parapets tend to get shot. This I realise and understand. We already have a number of cases of blogger bashing. But I can’t help feeling that this is changing, and that the change is being brought into existence by the openness and transparency that social software affords us.

Soon, an enterprise that reacts unwisely to truths emanating from its internal and external social software implementations will pay a heavy market price for its actions. Values count; actions that define values count even more.

A fable about DRM

The kernel for this post is a story on Amazon’s Unbox by David Berlind on ZDNet. As he calls it, more C.R.A.P. [Thanks, David. And my regards to Dan.]
Read it and weep, because we’re all due to get so much more of the same. And guess what? We’ve been here before big time. Bear with me and you will see.

People get very emotional when it comes to IPR and DRM. Everyone’s up in arms: the “content creators”, the “content funders”, the “content publishers”, the “content carriers”, the “content-receiving-device manufacturers”, the “content-receiving-device’s-operating-system-creator”, the list is endless…. I haven’t even got any space for Stevens’ Tubes.
So it’s all about “content”. Apparently. How I detest that word.

Let me try and talk about DRM in a completely “content-free” context. Note: everything that follows is just an attempt to place the issue of DRM in a different perspective, to frame the argument differently. So please don’t bother critiquing the stuff on historical authenticity; I am not claiming this to be some deeply erudite history of computing over the last three or four decades. It’s just an attempt to frame DRM issues in an unemotional context.

Enterprises.

Let’s take a look at enterprise architectures.

Thirty years ago, enterprise architectures were simple. We had IBM. And we had the BUNCH [Burroughs, Univac, NCR, Control Data and Honeywell]. The BUNCH weren’t Wild; they were for the most part also-rans in IBM’s shadow. And every enterprise chose one vendor. It doesn’t matter how that choice was made; the point is that there was only one.

They all rallied round one flag. Shift tin. The money was in the tin. They gave away the software and the services.

Life was good. -Ish, anyway. You didn’t have to worry about enterprise application integration. It was the vendor’s problem.

Not everyone needed a mainframe, or could afford a mainframe. So people time-shared. Or did without.
Then the “dirty guys” [see Aside] wandered in. The midrange brigade. Digital and Data General et al. They built minicomputers, and the firms that did without now had an option. They could buy minicomputers or lease them or go without. So they did, they bought the stuff in droves. Exciting times. The Soul of A New Machine. [Aside: When IBM entered the midrange marketplace, I seem to remember a wonderful ad that took a headline from one of the broadsheets, possibly the Wall Street Journal, saying “IBM to clean up dirty end of market”. This banner headline dominated the top of the ad, looking like it was crudely torn out of the paper. Then there was a lot of white space. And in smaller letters at the bottom were the words “The bastards say ‘Welcome’ “. I think it was Data General.]

Life remained good. Ish. You still didn’t have to worry about enterprise application integration. Most enterprises remained resolutely single-vendor, at least partly because software and services were virtually free.

Unfortunately for the vendors, a few things started happening. Moore’s Law had taken hold, Metcalfe’s Law was getting into gear, and both AT&T and IBM were getting into antitrust trouble. With their attention occupied elsewhere, AT&T went and gave away Unix. And IBM gave away Microsoft. After all, software and services were worth nothing.
But Moore’s Law marched on, Metcalfe did his bit, the PC revolution was in full swing, and to cap it all there were more versions of “unix” about than the population of China. Calling themselves “open systems”.

Now life got complicated. Everyone wanted in. There were “program package” companies, database companies, systems integrators, network specialists, everyone.

And they got everywhere. Enterprises weren’t single-vendor havens of peace any more. Hybrid architectures blossomed everywhere, made worse by a glut of mergers and takeovers and diversification strategies. And making things work wasn’t easy any more.

So everyone started charging for what used to be free. A very painful period. And a new industry was born. Enterprise Application Integration.

Crudely put, EAI was the price you paid for getting to the stuff you had already paid for, because everyone had made sure that you couldn’t get at it otherwise. But they were boom years and the enterprises paid up. Sometimes grudgingly, but they paid up.

And life was good. For the vendors and integrators, that is. Not for the enterprises.

People realised that this was a mess, and that there was a need for open standards to make things easier. So standards bodies popped up everywhere. And were immediately taken over by the only people who had money. The vendors and integrators. So standardisation didn’t happen. And the enterprises quietly cried in their sleep. And kept paying up.
Moore and Metcalfe marched on. Bloatware took up the slack. So did EAI. And a bunch of consultants riding that gravy train to hell, reengineering everything. If it moved, reengineer it. If it didn’t, reengineer it anyway. And the enterprises continued to wail and gnash their teeth. Some didn’t make it. The rest paid up.

Time for a few more new industries. One that focused on telling people there was no longer any business value in IT. Which was true for the enterprise, but definitely not true for the vendors and consultants. One that focused on wage arbitrage. And of course good ol’ Linux.

Somewhere in between, the World Wide Web [an aside: is www the only known case of an abbreviation with three times as many syllables as the long form?] came in, and set the scene for another whole new industry.

But let me stop there for the purposes of this fable.

Enterprises spent, and continue to spend, an enormous amount of money trying to integrate applications, trying to get to the data they “own”, their “content”, and trying to do things with that data. And DRM 1.0, the proprietary nature of all the stacks, made this happen. Many people made money from this, but not the customer. The enterprise. And many enterprises went to the wall as a result of this shambles.

People did push back, but it’s taken a very long time for us to get anywhere close to an open-standards, open-platforms, opensource software ecosystem. And we’re not there yet, not by a long chalk.

Now, as telephony becomes software, as the internet joins Moore and Metcalfe and Gilder, we have DRM 2.0 coming our way.

But guess what? This time the enterprise is not the customer.

The individual is the customer.

Individuals, in comparison to enterprises, have a far lower threshold for putting up with being conned.

What DRM 2.0 seeks to do is to recreate the walled gardens, the vendor lock-ins, the wonderful annuities that EAI, or DRM 1.0 provided. Annuities that destroyed value for all bar the vendors and consultants the first time around.

So imagine EAI is IAI, Individual Application Integration. Or leave it as EAI, Entertainment Application Integration.

Welcome to DRM 2.0.

My gut feel is that my own generation, the ones who paid through the nose for EAI/DRM the first time around, the ones who were constantly told that IT had no business value, is not going to do anything about it. We’re so used to being shafted that we’re in “Take a Number” mode.

And we make the enterprise decisions today, so we will probably implement EAI/DRM 2.0 and go through the nightmare again. Stockholmers.

But not Generation M. They can see the stupidity [I hope and pray they do]. So I watch them with interest, wondering whether they will be able to do what we failed to do. Because they can.

An addendum: How will enterprises implement EAI/DRM 2.0? By doing the wrong thing on identity, on permissioning, on authentication. By doing the wrong thing on security. By doing the wrong thing on platform independence. By doing the wrong thing on Internet Protocol. By doing the wrong thing. Grandma, what sharp teeth you have.

And that’s why I spend time thinking about IPR and DRM and Identity in an enterprise context. Because it’s easy to be wrong. Sure there are good vendors out there, good consultants out there, good software providers, good telcos, good device manufacturers. But they are few and far between.
Every fable should have a moral at the end of it, I guess.

The moral of this fable is that with DRM 1.0, the content-creators, the enterprises, were the primary losers. The vendors and consultants and intermediaries all said “this is good for the enterprise”. It was good for them. But not for the enterprise. Hmmm.