Estimating the value of opensource

I came across this Linux Foundation press release via the 451 CAOS Theory blog. Headlined Estimating the Total Development Cost of a Linux Distribution, it was something I had no choice but to read. And it makes for interesting reading.

I gave the report a quick once-over; initial reactions were not good. I was up in arms about a number of things, three in particular. For one, the report relies on replacement cost as a basis for valuation; even if I were comfortable with the way the replacement costs were calculated, I would always be less comfortable with a replacement-costs-alone approach to valuation. A second issue, openly admitted to in the report, is the use of Source Lines of Code (SLOC), and the quantity-not-quality risk that comes with it. And the third issue, also alluded to in the report, is the use of COCOMO (COnstructive COst MOdel) in an opensource context, rooted as it is in traditional proprietary development.
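To make the critique concrete, here is the arithmetic such studies rest on, in a minimal sketch. The coefficients are the standard basic-COCOMO “organic” values; the salary and wrap rate are illustrative assumptions of mine, not the report’s figures:

```python
# Basic COCOMO, "organic" mode: effort in person-months = 2.4 * KSLOC^1.05.
# The coefficients are the standard organic-mode values; the salary and
# wrap rate are illustrative assumptions, not the figures from the report.

def effort_person_months(ksloc: float) -> float:
    """Estimated effort for one package of `ksloc` thousand lines of code."""
    return 2.4 * ksloc ** 1.05

def replacement_cost(ksloc: float,
                     annual_salary: float = 75_000.0,  # Western salary proxy
                     wrap_rate: float = 2.4) -> float:  # overhead multiplier
    """Effort priced at a loaded monthly cost per developer."""
    monthly_loaded_cost = (annual_salary / 12.0) * wrap_rate
    return effort_person_months(ksloc) * monthly_loaded_cost

# Such studies typically run the model per package and sum the results;
# note that with an exponent above 1, one big lump would cost more than
# the sum of its parts.
packages_ksloc = [120.0, 45.0, 300.0]
print(f"${sum(replacement_cost(k) for k in packages_ksloc):,.0f}")
```

Every headline dollar in such an estimate is hostage to those two parameters, the salary proxy and the wrap rate, which is worth keeping in mind for what follows.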

But I decided to set all these reactions aside, and sought to concentrate on what I could learn from the report. Three key things occurred to me:

1. We still haven’t really “got” the Because Effect. When someone says “The Linux operating system is the most popular open source operating system in computing today, representing a $25 billion ecosystem in 2008”, I start worrying. After all, Google alone is worth a tad more than $25 billion, even at today’s prices. When the someone in question is the Linux Foundation, I worry a little more. Google’s valuation is at least in part due to its operating costs being what they are, based on extensive use of opensource software; the value created because of Linux dwarfs the value captured within the Linux “ecosystem” itself.

2. We still haven’t really “got” global sourcing. Twenty years after the offshore industries began, we’re still using Western proxies for pricing labour, and wrap rates that appear to be based on traditional in-house approaches rather than partnered and offshored models.

3. We still haven’t really “got” the implications of community development. This, despite the work done by people like Eric von Hippel and Yochai Benkler, despite the prodigious outputs of the many people looking at, analysing, reporting on and summarising what’s happening in this field. Opensource is a well-established exemplar of community-based development, and we have to get our heads around the way this is valued, both within enterprises and across the industry as a whole.

Maybe I shouldn’t have started those three points with “we”. Maybe it’s me. What is clear to me is that I need to learn a lot about estimation and valuing and costing and pricing in a global, community-based, commodity-enabled open platform world.

And studies like the one I just finished reading will help me get there, as I begin to see what works and what doesn’t, what is known, and what answers aren’t forthcoming as yet. So thank you Linux Foundation, thank you Amanda McPherson, Brian Proffitt and Ron Hale-Evans. At the very least you’ve given me stuff to critique, stuff I can point to and say “that works for me, that doesn’t work for me”. But in fact you’ve given me a lot more: stuff to think about, stuff to work on.

So I will give the report another, slower read, and revert to the authors with comments and questions. Maybe you’d like to do the same.

Musing about tipping points and connectors

I loved this story from a Malcolm Gladwell and Tipping Point perspective. [My thanks to @monkchips for tweeting about it].

Whatever your political persuasion, do take part. There is something for all of us to learn from such experiments, so can I encourage you to participate in what Hjortur is doing? Visit IfTheWorldCouldVote.com and do what you feel you must.

Hjortur’s Australian tale reminded me of some of my early experiences after I started blogging “externally”. In those days I had a ClustrMaps plugin, and I could actually tell who was lighting up what. That dot in New Zealand is so-and-so who went back there. That one in the Caribbean is so-and-so on vacation. And so on and so forth.

Soon I expect we will be able to do that with far greater ease than we can imagine today. Especially if we keep taking part in experiments that show us how it all works at a simple yet meaningful level.

Musing gently about traffic and information and buyers and sellers

One of the themes that recurs quite often in the conversations I have is that of the next 50 billion devices. While people argue about the next 3 billion people getting connected, and while others wander around believing that the internet is nothing more than an illegal distribution vehicle attacking Hollywood, I get more and more intrigued by how information is becoming “live”.

Take this for example:

It’s a representation of the buses in Bangalore. [My thanks to @abhilash for letting me know via Twitter].

As you hover over the bus icon, you can see its location, bus number, current speed and some sort of unique reference to the vehicle itself. Looking at it, my natural reaction was to want more. Like “how crowded is it?”. Information that is not that easy to collect today, information that will become easier and easier to collect over time. And assimilate. And report. And display. Allowing each of us to make more informed decisions.
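To make the “want more” concrete, here is a sketch of what one record in such a feed might look like. The field names and values are mine, not the Bangalore system’s; the point is how naturally “how crowded is it?” slots in once someone starts collecting it:

```python
from dataclasses import dataclass
from typing import Optional

# A hypothetical record for one vehicle in a live feed. The field names
# are illustrative; they are not taken from the Bangalore system.
@dataclass
class BusPosition:
    vehicle_id: str    # some unique reference to the vehicle itself
    route: str         # the bus number
    lat: float
    lon: float
    speed_kmph: float
    occupancy: Optional[float] = None  # "how crowded is it?" -- not collected yet

def worth_waiting_for(bus: BusPosition, threshold: float = 0.9) -> bool:
    """A passenger-side decision that only becomes possible once occupancy
    is reported alongside position and speed."""
    return bus.occupancy is None or bus.occupancy < threshold

bus = BusPosition("KA-01-F-1234", "335E", 12.9716, 77.5946, 28.0, occupancy=0.95)
print(worth_waiting_for(bus))  # False: the bus is nearly full, let it go
```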

I travel around London using a variety of vehicular and ambulatory options: I don’t drive. That doesn’t mean I don’t use a car; when I do use one, someone else has to drive. It comes from never learning how to drive. When I was growing up, if you were rich enough to have a car, you tended to be rich enough to have a driver. So I didn’t learn then, and what with one thing and another, I’ve never done so since.

When I travel around London, one of the things that amazes me is the apparent emptiness of the bendy buses; I’ve assumed that it is because of the time I travel, which tends to be early or late rather than peak. But then I think to myself: if I were running the bus network, would it not be possible to be more “just-in-time” with the whole thing? I know that buses do get rescheduled and repurposed, but I sense that everything could be more efficient if better information were given to the people who decide.

Today it is not just about information flowing to the centre, but also to the passengers. As they, the passengers, make more and more informed decisions, everything should work better.

So there’s a metamorphosis going on. Stage one is where information is “automatically” collected and passed to the “centre”, allowing apparently better supply decisions; stage two is when this information is also passed to the edge, allowing the demand side to operate more effectively. But this is still the static web, the classic web.

What excites me is stage three, when the demand side can signal its intentions to the supply side cheaply and accurately and dynamically, and the supply side can respond cost-effectively. This has not yet happened, but the early signs are there: P2P collection of information on the fly, the extreme case of which is the intention, the signal itself.
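A toy sketch of what stage three might look like, with every name invented for the purpose: the edge publishes intentions, and the supply side aggregates them and decides where to respond:

```python
from collections import Counter

# Stage three as a toy: demand signals flow from the edge to the supply
# side, which responds to declared intent rather than to yesterday's
# passenger counts. All names here are invented for illustration.
intentions: Counter = Counter()

def signal_intention(stop_id: str, destination: str) -> None:
    """A passenger at the edge declares, cheaply and in advance,
    where they want to go."""
    intentions[(stop_id, destination)] += 1

def worthwhile_dispatches(min_demand: int = 20) -> list:
    """The supply side reads aggregated intent and decides where extra
    capacity would actually be used."""
    return [pair for pair, count in intentions.items() if count >= min_demand]

for _ in range(25):
    signal_intention("stop_42", "city_centre")
print(worthwhile_dispatches())  # [('stop_42', 'city_centre')]
```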

There’s been talk of the Intention Economy for quite some time now; the issue is more about how to make it happen. The VRM movement has been working on this for a while. As the power to collect and provide relevant information moves from the core to the edge, we will see this happen more and more. That is what the promise of the participative web is all about. The power of VRM, the power of the intention economy: all of it relies on the ability of the edge to provide better information about tomorrow.

We have spent too long dealing with better information about yesterday; we have to get more and more involved in a world where we have better information about tomorrow.

Learning about why people don’t adopt opensource

I’ve been consistently intrigued by the reasons people give for not using opensource, and by the vehemence and passion generated by all concerned. [Don’t you find it amazing that from the very start, the word “opensource” has conjured up images of long-haired pinko lefty tree-huggers in tie-dye t-shirts with the compulsory cigarette-floating-in-coffee-cup? What a feat of marketing by incumbent vendors.]

Over the last decade or so, I’d formed my own opinions as to why people refused to adopt opensource, largely based on observing what I saw around me. Anecdote and hearsay, even if underpinned by experience, doth not a formal study make, but for what it’s worth, I’ll share them here.

People don’t use opensource for one (or more) of seven reasons:

  1. They hate the principle. Such people are uncomfortable with the concept of opensource; they tend to get hung up on the free-as-in-gratis rather than the free-as-in-freedom, and they feel that somehow the very nature of their existence gets undermined by the use of opensource. It’s unAmerican, it’s McCarthyist, it’s even (hush your mouth) Communist. And don’t you know it’s already illegal in Alaska? Where would the world go if everyone started using free things? Opensource users are stealing from the mouths of people who work hard everywhere. The very idea! These people are hard to convince, but when convinced they experience Road-to-Damascus moments. Work on them, it will pay off.
  2. They believe it’s insecure. [Again, a wonderful feat of marketing, excellent management of the metaphors and anchors and frames around opensource.] Quite a common response. Code that everyone can use, that anyone can change, that no one owns? Open to inspection by all? How on earth could that possibly be secure? It’s all a plot to bring down the capitalist world as we knew it. Solvable by education.
  3. They’re out of their comfort zone. This tends to be the response of steady-state professionals in IT departments in many organisations. If it works, why try and fix it? Why force yourself to take responsibility for the integration, deployment and support of something, when you can pay someone else to take care of it all? They’re risk-averse and responsibility-shy; understandable and defensible, and this can often be solved by education.
  4. They know a better way. These are people who point to the end-to-end control that Apple/Microsoft has, and how that gives people more choice and a better experience. [Yes, I’ve always wanted to drive my car on railtracks, ensure that the wheels fit precisely on the tracks, and go by car only to the places the railway takes me. ?!?] Solvable by education.
  5. They don’t know about it. These people have been cocooned away so effectively that they aren’t even aware of the options they have. Totalitarian rule. Most probably they aren’t allowed to go on to that dangerous place, the internet, where they might see strange places and maybe even catch exotic diseases. If they do have connectivity, it’s locked down to a small number of cleared sites. Mozilla is definitely not one of them, and even Sun is banned. Solvable by education.
  6. They can’t do what they want with it. To me, this is one of the most understandable objections. They use something that’s proprietary, they’ve built a whole pile of things around the proprietary thing, and now they can’t function without it. It’s hard to replicate elsewhere or using anything else. It’s not just the applications, you have to think about the processes, the training, everything. I almost buy this. Almost. But all you need to do is imagine you are in a merger or takeover, and all this changes. There is an imperative to move, and all the excuses disappear. So while I have sympathy for this view, I am aware of how fragile it really is. The best way to solve this one is to simulate a merger or takeover involving a firm that does not use what you’re using.
  7. The move represents serious operational risk. Puh-leese. Find the remaining deckchairs on the Titanic, and get them on it. They will happily move them around until iceberg time.

The out-of-comfort-zone concept is well described here, by chuqui, in a post written exactly two years ago. I guess for many of you all this is too anecdotal, too ephemeral. What you hanker after is facts. Good solid academic research on why people don’t use opensource.

This is your lucky day, because that’s precisely what this post is leading on to. There’s an intriguing article on the subject in the latest issue of First Monday, my favourite peer-reviewed webzine. Here it is:

Reasons for the non-adoption of OpenOffice.org in a data-intensive public administration

The study makes a number of general yet interesting points, amongst them:

  • the likelihood of pro-innovation bias in innovation studies
  • the fact that most studies focus on the adoption of innovation rather than reasons for not doing so
  • the understanding that non-adoption is not the mirror image of adoption.

The meat of the study is really worth getting into. The authors present a case study of the Belgian Federal Public Service Economy, a public-sector unit that evaluated OpenOffice.org but then decided to stay with Microsoft Office as its principal office toolset. Interestingly,

…the organisation opted for a hybrid approach, in which OpenOffice.org is installed on users’ workstations as a document convertor. This ensures that users can correctly open ODF documents on their workstations. OpenOffice.org is, however, not supported by the IT department.

So the “organisation” went for a solution that is, at least in part, “not supported by the IT department”. The plot thickens.

It’s a very interesting case study. There were three key projects:

  • introduction of a target platform for business critical application development
  • selection of a platform for business intelligence
  • standardisation of software offering Office-style functionality

Everything was set up right for the decision to go opensource. The European Commission had mandated that an ISO standard had to be used for exchanging documents by September 2009, and Open Document Format (ODF) was the only approved ISO standard. Belgian public sector companies were under pressure to save costs, and this increased the bias towards OpenOffice. And the manager in charge was a known sympathiser.

Just in case this wasn’t enough, the FPS Justice and the Brussels Public Administration, two similar public sector organisations in Brussels, had just opted for OpenOffice.

So let me repeat. Public sector organisation. In Brussels, the heartland of European bureaucracy. Needing to reduce costs. Needing to move to ODF. Led by a sympathiser. Surrounded by OpenOffice adopters.

With me so far? I guess so. Until I tell you what they did. They went for Microsoft Office. With the ODF plugin developed by Sun.

As I said, interesting case study.

Three things stood out for me. One, the decision making process appeared flawed. Project 2, the decision to go for a specific business intelligence platform, was “guided by the fact that [the platform] offers powerful integration with Microsoft Office”. How could this decision be taken before the decision to choose between OpenOffice and Microsoft Office?

Two, the decision appeared to be driven by heavy users rather than regular users. The heavy users were the ones carrying out serious data-intensive activities, who had built a plethora of tools on the development platform around Microsoft Office. These tools were hard to price in terms of migration costs, and there was a lot of fear and doubt around conversion and compatibility in general.

Three, no detailed TCO analysis had been made. I quote:

It should be noted, however, that some factors obscured the actual level of [these] potential cost savings. First, some of the licences for Microsoft Office had already been purchased, and were considered to be sunk costs by the FPS Economy. Second, our informants indicated that the TCO for OpenOffice.org could not be estimated precisely, due to the uncertainty regarding the cost of the conversion of applications and macros. Hence, during the project, no detailed TCO analysis was made.

But you know what? All that pales into insignificance when you read the next line:

This is consistent with the results of previous studies that showed that organisations found it difficult to assess the TCO of OpenOffice.org, even after having performed the migration (COSPA, 2005; Drozdik et al., 2005; Russo et al., 2003; Ven et al., 2007a, b; Wichmann, 2002)

Wow. People have carried out studies that prove that it is hard to work out the TCO for OpenOffice.org. Hmmmm. Anyone have meaningful TCOs for the alternatives?
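If you want a feel for why the estimate defeats people, put hypothetical numbers to it. In the sketch below every figure is invented; the point is that the conversion-cost term carries an uncertainty band wide enough to swallow the rest of the calculation:

```python
# A back-of-envelope TCO comparison. Every number here is hypothetical;
# what matters is the width of the conversion-cost band, not the totals.

def tco_range(licences: float, support: float, training: float,
              conversion_low: float, conversion_high: float):
    """Total cost of ownership as a (low, high) interval."""
    fixed = licences + support + training
    return (fixed + conversion_low, fixed + conversion_high)

# Staying with the incumbent: the licences already bought are sunk costs,
# and there is nothing to convert.
stay = tco_range(licences=0, support=100_000, training=0,
                 conversion_low=0, conversion_high=0)

# Migrating: no licence fees, but converting years of macros and
# applications is anyone's guess -- hence the wide band.
move = tco_range(licences=0, support=120_000, training=80_000,
                 conversion_low=50_000, conversion_high=900_000)

print(stay)  # (100000, 100000)
print(move)  # (250000, 1100000)
```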

Invented here

Times are hard. And when times are hard, every firm has four choices:

  • Stop Doing Something Completely
  • Continue to Do Something, Just Less of It
  • Start Doing Something New
  • Do Something Differently

Stopping doing something completely is hard. Each firm is made up of values and relationships, habits and processes, people and culture, all built around its specific products and services and territories and markets. Firms that are global in scale and reach tend to operate shared-service models and matrix structures; it becomes extremely difficult to work out the precise impact of exiting a product or service or territory or market.

Accounting and related decision support systems tend to be built around jurisdictions and territories, using regional conventions or, at best, regional interpretations of global conventions; employee regulations and rights also follow fault-lines that are regional; the healthy contention envisaged in the matrix turns into open conflict. Presentations and spreadsheets circulate like snake-oil, accompanied by the requisite snake-oil salesmen.

In such an environment, good information is hard to come by, and as a result good decisions are hard to make. A few do get made. But for the most part, companies tend not to exit products, services, territories and markets in their entirety. The things that do get stopped tend to fall into the category of Yesterday’s-Necessities-Just-Became-Today’s-Luxuries: things that can be stopped despite the escalation of matrix warfare.

As a result, instead of Stopping Doing Something, many firms go for the Continue to Do Something, Just Less of It Option. Because Doing Less of Something is Easy. Targets get handed out, “haircuts” are cascaded down, and Death by A Thousand Cuts becomes the norm.

My sense is that the haircut is a losing strategy: it is just a way of improving the optics while staving off the inevitable for a short period. Haircuts are intrinsically the lazy man’s response, management-by-spreadsheet; there tends to be no real management involved (and even less leadership). Why do I say this? Reducing budgets and targets and tolerances is a reasonable thing to do, provided there is no cascade of the reduction. For it is in the cascade that a dereliction of duty occurs, an abdication of responsibility. Cuts are fine, haircuts are not. Adjusted targets need to be held at the highest practicable level in the organisation, and every attempt to cascade them should be met with severe resistance.

Having said that, cutbacks are reasonable things to do, especially in environments where payroll and SG&A are the two biggest discretionary expense categories. Travel and entertainment are the traditional early targets, and this is a good thing. What is less good is the tendency to cut back on training and on graduate hiring; both can be short-sighted.

It was John Maynard Keynes who said:

For the Engine which drives Enterprise is not Thrift but Profit

So if you have to resort to cutbacks, try and remember Keynes’ words. Otherwise you may land up saving a lot of “cost” but, in the process, losing the firm.

Starting Doing Something New is also hard to do. In most firms there is some concept of capacity planning, however rudimentary or makeshift, and firms tend to operate at perceived full employment. So in order to do something new, you have to stop doing something old. The immune system of the firm tends to be at the enterprise equivalent of DEFCON One by this time, making it hard to stop things “organically”. As discussed earlier, making strategic decisions to stop doing something is also hard.

Having a separate focus on invention and creativity can solve this problem, but that too is hard to do: there are risks of alienation, isolation and ivory-tower thinking.

Which brings me on to Doing Things Differently: a multitude of options, ranging from outsourcing and offshoring to the introduction of real innovation within the firm. Which leads to the immune-system response of Not Invented Here. I am reminded of an article written by Joel Spolsky maybe seven years ago, headlined In Defense of Not-Invented-Here Syndrome. It’s well worth reading. As is, coincidentally, Jeff Atwood’s recent take on the same issue, which can be found here.

Both Joel and Jeff make the point that whatever you believe your core business function to be, you should do it yourself. Deciding what is core and what is non-core is a hard thing to do, especially in an organisation with shared-service models. One man’s meat is another man’s poison, one man’s ceiling is another man’s floor. And one man’s core is another man’s non-core.

I would rather paraphrase what Spolsky and Atwood said: figure out what you’re good at. Check that there is a market for what you’re good at, that people want to pay you for doing it. Then make sure you do it. From your perspective, everything else should be non-core. So get others to do the rest, and focus very hard on what you’re good at.

Which brings me back to my old opensource rule-of-thumb:

  • If the problem is universal, look to the opensource community for a solution
  • If the problem is domain-specific, look to the “commercial” community
  • If the problem is unique to your firm, look to your own resources
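For what it’s worth, the rule-of-thumb fits in a few lines of code (the scope labels are mine):

```python
# The rule-of-thumb as a toy lookup; the scope labels are my own.
def where_to_look(problem_scope: str) -> str:
    return {
        "universal": "the opensource community",
        "domain-specific": "the commercial community",
        "firm-specific": "your own resources",
    }[problem_scope]

print(where_to_look("universal"))  # the opensource community
```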

I think that every CIO should be looking hard at the tools being used to solve universal problems, and making sure that opensource components are used aggressively. There is no better time.

Remember, the objective is to reduce costs, not heads. Given the option, what would you rather do? Fire people or increase your use of opensource? Think about it.