Thinking about opensource and VRM

For many years I’ve been of the belief that:

  • when a problem is generic, look to the opensource community for the solution
  • when a problem is specific to a vertical market, look to the commercial community
  • when a problem is unique to your organisation, look to your own developers

You don’t have to be legalistic about it; this is just a rule of thumb and, at least to my warped mind, it represents common sense.

The way I’ve phrased it, I may give the impression that the opensource community is incapable of solving vertical market problems. That is not the case. “Generic” is in the eye of the beholder: if there is sufficient scale then the opensource community will respond. It is the scale rather than the vertical-market-ness that determines this response.

Take a look at OpenMRS: a community-developed, open-source, enterprise electronic medical record system framework. It is based on Java, Hibernate, Tomcat, MySQL and XML, and runs via a browser.

OpenMRS is not unique. As far as I can make out, the Collaborative Software Initiative, which I first heard of via Dana Blankenhorn, was founded precisely to build vertical market apps and stacks in environments where the scale was attractive.

It is now a frightening five years since we started talking about “the missing opensource projects”. It is over four years since R0ml Lefkowitz gave his seminal presentation at OSBC 2004. Opensource is gently moving up the stack; gently being the operative word.

I cannot help but think that there is a direct and important correlation between this movement of opensource up the stack and the mushrooming of VRM. The VRM movement needs leverage, and this leverage cannot come from the existing “vendor” community. Of course there are enlightened people within the vendor community, and it is not my intention to disparage them. But you can’t break wind against thunder and expect an equitable outcome.

There is hope yet. The opensource community is moving up the stack, from generic to large-scale vertical. The VRM movement is gathering pace and momentum. Not surprisingly, there’s a lot of overlap between the two “communities”, if you can call them that. There is a difference, though: opensource has reached well-established technical execution, while VRM is still moving through the amorphous concept-wrangling stage.

For VRM to get to full-speed-ahead execution, something else needs to happen. And I think that something else is the “verticalisation” of opensource. The good news is that it’s begun to happen.

Views?

Musing about “commercial” development of Linux

There’s an interesting study of Linux kernel development that’s been doing the rounds recently. Published this month by the Linux Foundation, it makes for fascinating reading.

While it concentrates on the kernel itself, the report is still exhaustive:

  • Covers a period of over 3 years
  • Spread over 14 kernel releases
  • Relating to 3621 lines added, 1550 lines removed and 1425 lines changed every day
  • Forming the output of over 3600 developers from over 270 companies

Some of the key takeaways include:

  • The individual development community has doubled in the period under review
  • The top 10 individual developers accounted for over 15% of the output
  • The top 30 individual developers accounted for around 30%
  • The top 10 contributing “groups”, including companies, account for over 75% of the output
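
As an aside, concentration figures like these are easy enough to approximate for yourself against any git tree. Here is a minimal sketch in Python, with two caveats: this is my own illustration rather than the report’s methodology, and it counts commits where the report counts changed lines.

    # Rough sketch: how concentrated is authorship in a git repository?
    # Counts commits per author via "git shortlog"; the Linux Foundation
    # report measures changed lines with its own tooling, so treat this
    # as an approximation of the idea, not a reproduction of its numbers.
    import subprocess

    def commit_counts(repo_path):
        # "git shortlog -sn HEAD" prints "<count>\t<author>" per line,
        # sorted by commit count in descending order.
        out = subprocess.run(
            ["git", "-C", repo_path, "shortlog", "-sn", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [int(line.split("\t")[0]) for line in out.splitlines()]

    def top_share(counts, n):
        return sum(counts[:n]) / sum(counts)

    counts = commit_counts("./linux")  # path to a local kernel checkout; adjust to taste
    print(f"top 10 developers: {top_share(counts, 10):.1%} of commits")
    print(f"top 30 developers: {top_share(counts, 30):.1%} of commits")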

But the most important assertion made by the report is the following:

Over 70% of kernel development is demonstrably done by developers who are being paid for their work

And you know something? I agree with much of the report; there is a lot I have learnt from it. But that one conclusion, that nearly three-fourths of development is carried out by paid developers, doesn’t quite sit well with me.

I could be way off beam here, but my hunch is that the conclusion could be wrong. Why? I think it has something to do with the Because Effect.

Yes, the developers are paid. Yes, they are paid for their work. What I am less sure of is that the work they get paid for is the work that contributes to the kernel. Over the years I have been in many situations where developers have asked me whether they can contribute to opensource projects, but much of it has had to do with things like opening ports.

I am sure that a very high percentage of the output (in the Linux kernel, over the last three years) has come from employees of commercial organisations. But my gut feel is that these developers contributed the effort and the code because it made their jobs easier, because their contributions helped them solve problems, rather than because they were directed to complete “assignments” or “work packages” related to the kernel.
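
For what it’s worth, my understanding (an assumption on my part, not something I can verify from the report) is that “paid” is largely inferred from affiliation, for instance from the employer behind a commit author’s email address. A toy sketch makes the weakness plain: the domain tells you who pays a developer, not why a patch was written.

    # Toy sketch of affiliation-based classification. The domain table
    # is hypothetical and tiny; the real report maintains far richer
    # author-to-employer mappings. The point stands either way: an
    # @ibm.com address demonstrates employment, not that the patch was
    # an assigned work package rather than scratching one's own itch.
    EMPLOYER_BY_DOMAIN = {
        "redhat.com": "Red Hat",
        "ibm.com": "IBM",
        "intel.com": "Intel",
    }

    def classify(author_email):
        domain = author_email.rsplit("@", 1)[-1].lower()
        return EMPLOYER_BY_DOMAIN.get(domain)  # None means "unknown/unaffiliated"

    def paid_fraction(author_emails):
        affiliations = [classify(e) for e in author_emails]
        return sum(a is not None for a in affiliations) / len(affiliations)

    # Two corporate addresses and one gmail address: "67% paid", says the
    # method, and it may well be right about the payroll while remaining
    # silent about the motive.
    print(paid_fraction(["a@redhat.com", "b@ibm.com", "c@gmail.com"]))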

It’s a question of motive.

As more and more firms adopt Linux the community of developers will grow. This is not surprising. As more and more firms adopt Linux there is more of a market for other firms to make money because of Linux rather than with Linux. This is also not surprising. And a small number of firms will actually continue to make money with Linux, if you want to call the sale of distros and support and documentation and training and consultancy a “with” proposition.

As all this happens, the bulk of the growth in consumption of Linux takes place in commercial organisations, so it is not surprising that the bulk of development of the kernel takes place via the efforts of people in those organisations.

My hunch, however, remains. This is not paid work. It is voluntary work done by people who do get paid, paid to do other things.

I may have got this completely wrong, and am happy to be proved wrong. I will learn.

Views?

Musing about artificial scarcity

The Because Effect is all about understanding abundances and scarcities. Any firm that truly understands the abundances and the scarcities of a given economic era is bound to prosper, as Gilder noted many years ago.

Opensource is all about The Because Effect: it is a means of making abundant the things that were previously scarce, an abundance underpinned by licensing models that prevent artificial scarcity.

Which is why this post by David Wallace is worrying; while I’d heard of a few cases over the years, this example brings it home. While I don’t know all the facts, what is clear is that we need to spruce up our capacity to support and protect opensource contributors. Theory is fast moving into practice here. Doc? Don? Any advice for Dave the LifeKludger?

We have a lot to learn about abundance. How to fund it. How to make sure it stays abundant, how to protect the abundance. How to make money from it without reverting to the corruption of lock-in. I am particularly intrigued by Larry Lessig’s work on CC Zero and CC Plus.

What have you changed your mind about?

That’s the subject of a very powerful set of essays published recently in the Edge World Question Center. I haven’t read all of them yet; I was working through them sequentially when I received an e-mail from Pat Kane of ThePlayEthic, pointing me at the answer given by Kevin Kelly. [Thanks, Pat, and I look forward to meeting you on Thursday.]

I loved it. And I suggest you stop whatever you’re doing and read it, now.

Some tidbits:

Everything I knew about the structure of information convinced me that knowledge would not spontaneously emerge from data, without a lot of energy and intelligence deliberately directed to transforming it. All the attempts at headless collective writing I had been involved with in the past only generated forgettable trash. Why would anything online be any different?

……….

How wrong I was. The success of the Wikipedia keeps surpassing my expectations. Despite the flaws of human nature, it keeps getting better. Both the weakness and virtues of individuals are transformed into common wealth, with a minimum of rules and elites. It turns out that with the right tools it is easier to restore damaged text (the revert function on Wikipedia) than to create damaged text (vandalism) in the first place, and so the good enough article prospers and continues. With the right tools, it turns out the collaborative community can outpace the same number of ambitious individuals competing.

……….

It has always been clear that collectives amplify power — that is what cities and civilizations are — but what’s been the big surprise for me is how minimal the tools and oversight are needed. The bureaucracy of Wikipedia is relatively so small as to be invisible. It’s the Wiki’s embedded code-based governance, versus manager-based governance that is the real news. Yet the greatest surprise brought by the Wikipedia is that we still don’t know how far this power can go.

……….

It is one of those things impossible in theory, but possible in practice. Once you confront the fact that it works, you have to shift your expectation of what else that is impossible in theory might work in practice.

……….

When you grow up knowing rather than admitting that such a thing as the Wikipedia works; when it is obvious to you that open source software is better; when you are certain that sharing your photos and other data yields more than safeguarding them — then these assumptions will become a platform for a yet more radical embrace of the commonwealth.

……….

Generation M is growing up knowing that Wikipedia works; to Generation M, it is obvious that open source software is better; Generation M, the Multimedia, Multitasking, Mobile Generation, is certain that sharing photos and other data yields more than safeguarding them.

Generation M understands what Kevin Kelly says here:

Both the weakness and virtues of individuals are transformed into common wealth, with a minimum of rules and elites. It turns out that with the right tools it is easier to restore damaged text (the revert function on Wikipedia) than to create damaged text (vandalism) in the first place, and so the good enough article prospers and continues. With the right tools, it turns out the collaborative community can outpace the same number of ambitious individuals competing.

In other words, for Generation M, or maybe the generation after that, the tragedy of the commons can be overcome; the free rider problem can be overcome; they have seen the promised land: the collaborative community can outpace the same number of ambitious individuals competing.

I have to repeat that. The collaborative community can outpace the same number of ambitious individuals competing.

Read it and weep. With joy. Because it is just possible that future generations may not have to put up with the trash that we have.
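
A footnote on the revert function, because Kelly’s asymmetry is at bottom a data-structure point: keep every revision, and repair becomes a single cheap operation, while damage still costs the vandal a fresh edit every time. A toy model (mine, and nothing like MediaWiki’s actual implementation):

    # Toy model of the revert asymmetry. Because the full revision
    # history is kept, undoing vandalism is one constant-time append
    # of an old revision, while vandalising always requires composing
    # a new edit. An illustration only, not MediaWiki's real design.
    class WikiPage:
        def __init__(self, text=""):
            self.revisions = [text]  # full history, never discarded

        @property
        def current(self):
            return self.revisions[-1]

        def edit(self, new_text):  # what a vandal must do: supply content
            self.revisions.append(new_text)

        def revert(self, to=-2):  # what a repairer does: one reference copy
            self.revisions.append(self.revisions[to])

    page = WikiPage("Opensource is moving up the stack.")
    page.edit("lol, deleted")  # damage: effort per attack
    page.revert()              # repair: one click, tool-assisted
    assert page.current == "Opensource is moving up the stack."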

Musing about Generation M and valuing IT skills in an opensource world

The kernel for this post was an innocuous article on the BBC website, headlined Computer knowledge “undervalued”. I read it some time ago, and for some reason it felt like I’d just sat on a saddle with a burr under it. Slowly I realised that there was no saddle, but that the burr remained. A big burr.  And I thought to myself, oops. Double oops. Treble oops with cream on top. Why did I think that? Come for a ride with me.

Imagine there was an enterprise. Any enterprise. Now imagine that that particular enterprise had a bunch of people with “computer skills”. Imagine further that the specific “computer skills” these people had were, shall we say, “proprietary” skills.

With me so far? Okay. Now let’s imagine a bunch of consultants coming along and helping said enterprise “value” these “proprietary” skills, and in some convoluted manner “placing” this “value” “on the balance sheet”. [Why would this happen? Because it’s the sort of d^*mfool thing consultants do.]

Oops. Now, with just a tiny bit of legerdemain, the enterprise’s cost of converting from a proprietary world to an opensource world has just gone stratospheric.
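
Try some toy numbers (mine, purely for illustration) to see the mechanics. Once those “proprietary” skills sit on the balance sheet as an asset, migrating away from the proprietary stack no longer costs just the retraining; it also books the write-off of a now-worthless “asset”:

    # Toy arithmetic with made-up numbers, showing how capitalising
    # proprietary skills inflates the apparent cost of switching.
    retraining_cost = 2_000_000       # cash genuinely spent on the move
    capitalised_skills = 10_000_000   # the consultants' "value" on the balance sheet

    # Before the legerdemain: switching costs what it costs.
    cost_before = retraining_cost

    # After: the migration also impairs the capitalised "asset".
    cost_after = retraining_cost + capitalised_skills

    print(f"cost without capitalisation: {cost_before:,}")
    print(f"cost with capitalisation:    {cost_after:,}")  # six times larger, on paper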

More worryingly, at one fell swoop, the opensource and web-savvy skills of Generation M have been made to disappear. [To be precise, the potential value of their skills has been decimated].

You’re right, it couldn’t possibly happen. No enterprise would be crass enough to do that. [RageBoy, you listening?]

And how do we avoid this thing that couldn’t possibly happen? Simple. We must value opensource skills substantially higher than proprietary skills.

Something to think about. When it comes to valuing computer skills, opensource beats proprietary every time: more optionality, less lock-in, better future-proofing and insurance against obsolescence, lower switching costs, easier retraining; the list goes on.

So. Let’s be careful out there.