Prime real estate

I was looking through the latest issue of Edge, where Daniel Kahneman, the 2002 Nobel Laureate in Economics, was having a conversation with a bunch of luminaries that included Richard Thaler, Nathan Myhrvold, Elon Musk and Sendhil Mullainathan.

There’s a short video and some transcripts, followed by annotations and commentaries. Worth a look, as Kahneman runs his “masterclass” in behavioural economics, with a particular accent on “priming”. Important stuff, even if you disagree, in the run-up to the election.

So here it is. My comments will follow; I haven’t quite digested all of it yet.

Martha and Dank Redux: Thinking about Cloud Computing

It must be nearly ten years since I first read Larry Lessig’s Code and Other Laws of Cyberspace. When I read it, I remember being very taken with one of his early stories, that of Martha and Dank. Here’s an excerpted version:

It was a very ordinary dispute, this argument between Martha Jones and her neighbours. […..]. The argument was about borders — about where her land stopped. […..]. Martha grew flowers. Not just any flowers, but flowers with an odd sort of power. They were beautiful flowers, and their scent entranced. But, however beautiful, these flowers were also poisonous. […..]. The start of the argument was predictable enough. Martha’s neighbour, Dank, had a dog. Dank’s dog died. The dog died because it had eaten a petal from one of Martha’s flowers. […..]. “There is no reason to grow deadly flowers,” Dank yelled across the fence. “There’s no reason to get so upset about a few dead dogs,” Martha replied. “A dog can always be replaced. And anyway, why have a dog that suffers when dying? Get yourself a pain-free-death dog, and my petals will cause no harm.”

If you haven’t done so already, you should read the whole book. There’s an updated version available.

I read the original book during the heady days of 1999, and the lessons have stayed with me. Too theoretical for you? Take a look at this story from a few days ago: Woman in jail over virtual murder.

Let’s take a look at what actually happened. The woman was playing Maplestory, a massively multiplayer online role-playing game or MMORPG. As was the case with Dank in Lessig’s story, something happened to her in cyberspace. Her Maplestory virtual “husband” divorced her. And, just like Lessig prophesied with Dank, she was angry.

“I was suddenly divorced, without a word of warning. That made me so angry,” she was quoted by the official as telling investigators.

So what did she do? She took her revenge. She somehow got the log-in details of the man playing her virtual ex-husband, logged in as him, went into Maplestory and killed off her ex-husband character.

Yes, she took her revenge. In cyberspace. Revenge for an event that took place in cyberspace in the first place.

But. And here’s where it gets interesting, pretty much like Lessig wrote. The critical trigger event, the “divorce”, took place in Maplestory. The response, the “revenge”, took place in Maplestory. But the consequences of that revenge are taking place not in cyberspace, but in good ol’ bricks-and-mortar land.

The woman, a piano teacher, is now in a real jail. Bricks and mortar, with a bunch of metal bars thrown in for free. In jail. Today.

What’s this got to do with cloud computing? Maybe nothing. But here’s the way I look at it:

Until recently, we’ve thought we’ve been able to keep the real and virtual worlds distinct and separate. But we were wrong. They’ve already merged; the deed is done. We live in a hybrid world. The Maplestory incident is not unique. Leaving aside the world of MMORPGs, there’s been similar convergence in the social networking world. At the extreme, we’ve even had a number of cases where people have killed their partners after learning something about them via a social networking site. Examples are here and, more recently, here. Here the trigger events were in cyberspace, but the tragedies took place in real life. Real tragedies affecting real flesh-and-blood families, let us not forget that.

The merged world is here. Today. A merger of the “atoms” world and the “bits” world. But in this merged world, the laws that we have still pertain to atoms alone rather than to bits. Physical location is a key factor in determining jurisdiction and in referring to or selecting relevant legislation.

Today’s Economist has a 14-page special report on “Corporate IT”, which brings some of this up, particularly in the contexts of confidentiality, privacy, obscenity, hate crimes and libel. Data centres are located somewhere. Physically located somewhere. Cloud services, on the other hand, while fuelled by data centres, come by definition from “the cloud”. Anywhere. Everywhere.

We have a hybrid world, but without the right hybrid laws. To make matters worse, I think there’s a bigger problem looming. For decades the software industry has been privileged to be protected on one key issue: consequential loss. That might have been fine when software was software and services were services.

But not now. Not now, when software is delivered as a service. Not now, when customers buy the service rather than the software. Think about it this way. Let’s say you run a shop, renting premises and infrastructural services from a “landlord” within a shopping mall. And let’s say you were denied access to your shop for a while, or at the very least denied some basic services you were contracted to receive, like power. You’d have a pretty good claim on the landlord or the mall operator for the losses sustained as a result of their non-delivery.

We have a fascinating time ahead of us.

Software being delivered as a service, in an environment where virtual and physical worlds are colliding and converging, all against a backdrop of cloud services. Three significant dimensions of change, to be managed by laws that aren’t really fit for purpose for any one of the changes; all happening at a time when things are, shall we say, “delicate”, in the world of commerce. At least that’s the way it seems to me; I would love to be corrected.

We have a fascinating time ahead of us.

Musing about tipping points and connectors

I loved this story from a Malcolm Gladwell and Tipping Point perspective. [My thanks to @monkchips for tweeting about it].

Whatever your political persuasion, do take part. There is something for all of us to learn from such experiments, so can I encourage you to participate in what Hjortur is doing? Visit IfTheWorldCouldVote.com and do what you feel you must.

Hjortur’s Australian tale reminded me of some of my early experiences after I started blogging “externally”. In those days I had a ClusterMaps plugin, and I could actually tell who was lighting up what. That dot in New Zealand is so-and-so who went back there. That one in the Caribbean is so-and-so on vacation. And so on and so forth.

Soon I expect we will be able to do that with far greater ease than we can imagine today. Especially if we keep taking part in experiments that show us how it all works at a simple yet meaningful level.

Musing gently about traffic and information and buyers and sellers

One of the themes that recurs quite often in my conversations is that of the next 50 billion devices. While people argue about the next 3 billion people getting connected, and while others persist in believing that the internet is nothing more than an illegal distribution vehicle attacking Hollywood, I get more and more intrigued by how information is becoming “live”.

Take this for example:

It’s a representation of the buses in Bangalore. [My thanks to @abhilash for letting me know via Twitter].

As you hover over the bus icon, you can see its location, bus number, current speed and some sort of unique reference to the vehicle itself. Looking at it, my natural reaction was to want more. Like “how crowded is it?”. Information that is not that easy to collect today, information that will become easier and easier to collect over time. And assimilate. And report. And display. Allowing each of us to make more informed decisions.
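Under the hood, a feed like this is just a stream of small structured records. Here is a minimal sketch, in Python, of what one vehicle update might look like; every field name is my invention rather than the feed’s actual schema, including the occupancy field that isn’t collected today.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BusUpdate:
    """One position report from a vehicle; all field names are invented."""
    route: str                         # the bus number shown on the map
    vehicle_id: str                    # unique reference to the vehicle itself
    lat: float                         # current location
    lon: float
    speed_kmh: float                   # current speed
    occupancy: Optional[float] = None  # 0.0 to 1.0; "how crowded is it?" is not collected today

def needs_relief_bus(update: BusUpdate, threshold: float = 0.9) -> bool:
    """One decision the operator could make once occupancy data exists."""
    return update.occupancy is not None and update.occupancy >= threshold
```

Once the missing field starts being collected, decisions like `needs_relief_bus` become possible at the centre, and the same record can be displayed to passengers at the edge.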

I travel around London using a variety of vehicular and ambulatory options: I don’t drive. That does not mean I don’t use a car; when I do use one, someone else has to drive. Comes from never learning how to drive. When I was growing up, if you were rich enough to have a car, you tended to be rich enough to have a driver. And so I didn’t learn then, and what with one thing and the other, I’ve never done so since.

When I travel around London, one of the things that amazes me is the apparent emptiness of the bendy buses; I’ve assumed that it is because of the times I travel, which tend to be early or late rather than peak. But then I think to myself: if I were running the bus network, would it not be possible to be more “just-in-time” about the whole thing? I know that buses do get rescheduled and repurposed, but I sense that everything could be more efficient if better information were given to the people who decide.

Today it is not just about information to the centre but also to the passengers. As they, the passengers, make more and more informed decisions, everything should work better.

So there’s a metamorphosis going on. Stage one is where information is “automatically” collected and passed to the “centre”, allowing apparently better supply decisions; stage two is when this information is also passed to the edge, allowing the demand side to operate more effectively. But this is the static web, this is classic web.

What excites me is stage three, when the demand side can signal its intentions to the supply side cheaply and accurately and dynamically. And the supply side can respond cost-effectively. This has not yet happened, but the early signs are there. P2P collection of information on-the-fly; an extreme case of such information is the intention or signal.
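The three stages can be caricatured in a few lines of Python; every name and structure below is invented for illustration, not a description of any real system.

```python
# A toy caricature of the three stages; all names here are invented.
supply_log = []      # stage one: edge -> centre ("the centre knows")
passenger_feed = []  # stage two: centre -> edge ("the passengers know")
intentions = []      # stage three: edge -> supply side ("demand signals itself")

def report_position(bus_id: str, stop: str) -> None:
    """Stage one: a vehicle automatically reports where it is."""
    supply_log.append((bus_id, stop))

def publish_to_passengers() -> None:
    """Stage two: the same information flows back out to the edge."""
    passenger_feed.extend(supply_log)

def signal_intention(passenger_id: str, stop: str) -> None:
    """Stage three: a passenger declares intent, cheaply and dynamically."""
    intentions.append((passenger_id, stop))

def demand_at(stop: str) -> int:
    """The supply side can now respond to declared demand, not just history."""
    return sum(1 for _, s in intentions if s == stop)
```

The first two functions are the classic web: collect, then broadcast. The last two are the interesting part, where the demand side stops being a passive audience and starts telling the supply side about tomorrow.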

There’s been talk of the Intention Economy for quite some time now; the issue is more about how to make it happen. The VRM movement has been working on this for a while. As the power to collect and provide relevant information moves from the core to the edge, we will see this happen more and more. That is what the promise of the participative web is all about. The power of VRM, the power of the intention economy, all these rely on the ability of the edge to provide better information about tomorrow.

We have spent too long dealing with better information about yesterday; we have to get more and more involved in a world where we have better information about tomorrow.

Learning about why people don’t adopt opensource

I’ve been consistently intrigued by the reasons people give for not using opensource, and by the vehemence and passion generated by all concerned. [Don’t you find it amazing that, from the very start, the word “opensource” has conjured up images of long-haired pinko lefty tree-huggers in tie-dye t-shirts with the compulsory cigarette-floating-in-coffee-cup? What a feat of marketing by incumbent vendors.]

Over the last decade or so, I’d formed my own opinions as to why people refused to adopt opensource, largely based on observing what I saw around me. Anecdote and hearsay, even if underpinned by experience, doth not a formal study make, but for what it’s worth, I’ll share them here.

People don’t use opensource for one (or more) of seven reasons:

  1. They hate the principle. Such people are uncomfortable with the concept of opensource, they tend to get hung up with the free-as-in-gratis rather than the free-as-in-freedom, and they feel that somehow the very nature of their existence gets undermined by the use of opensource. It’s unAmerican, it’s McCarthyist, it’s even (hush your mouth) Communist. And don’t you know it’s already illegal in Alaska? Where would the world go if everyone started using free things? Opensource users are stealing from the mouths of people who work hard everywhere. The very idea! These people are hard to convince, but when convinced they experience Road-To-Damascus moments. Work on them, it will pay off.
  2. They believe it’s insecure. [Again, a wonderful feat of marketing: excellent management of the metaphors and anchors and frames around opensource.] Quite a common response. Code that everyone can use, that anyone can change, that no one owns? Open to inspection by all? How on earth could that possibly be secure? It’s all a plot to bring down the capitalist world as we know it. Solvable by education.
  3. They’re out of their comfort zone. This tends to be the response of steady-state professionals in IT departments in many organisations. If it works, why try and fix it? Why force yourself to take responsibility for the integration, deployment and support of something, when you can pay someone else to take care of it all? They’re risk-averse and responsibility-shy; understandable, defensible, this can often be solved by education.
  4. They know a better way. These are people who point to the end-to-end control that Apple/Microsoft has, and how that gives people more choice and a better experience. [Yes, I’ve always wanted to drive my car on railtracks, ensure that the wheels fit precisely on the tracks, and go by car only to the places the railway takes me. ?!?] Solvable by education.
  5. They don’t know about it. These people have been cocooned away so effectively that they aren’t even aware of the options they have. Totalitarian rule. Most probably they aren’t allowed to go on to that dangerous place, the internet, where they might see strange places and maybe even catch exotic diseases. If they do have connectivity, it’s locked down to a small number of cleared sites. Mozilla is definitely not one of them, and even Sun is banned. Solvable by education.
  6. They can’t do what they want with it. To me, this is one of the most understandable objections. They use something that’s proprietary, they’ve built a whole pile of things around the proprietary thing, and now they can’t function without it. It’s hard to replicate elsewhere or using anything else. It’s not just the applications, you have to think about the processes, the training, everything. I almost buy this. Almost. But all you need to do is imagine you are in a merger or takeover, and all this changes. There is an imperative to move, and all the excuses disappear. So while I have sympathy for this view, I am aware of how fragile it really is. The best way to solve this one is to simulate a merger or takeover involving a firm that does not use what you’re using.
  7. The move represents serious operational risk. Puh-leese. Find the remaining deckchairs on the Titanic, and get them on it. They will happily move them around until iceberg time.

The out-of-comfort-zone concept is well described here, by chuqui, in a post written exactly two years ago. I guess for many of you all this is too anecdotal, too ephemeral. What you hanker after is facts. Good solid academic research on why people don’t use opensource.

This is your lucky day, because that’s precisely what this post is leading on to. There’s an intriguing article on the subject in the latest issue of First Monday, my favourite peer-reviewed webzine. Here it is:

Reasons for the non-adoption of OpenOffice.org in a data-intensive public administration

The study makes a number of general yet interesting points, amongst them:

  • the likelihood of pro-innovation bias in innovation studies
  • the fact that most studies focus on the adoption of innovation rather than reasons for not doing so
  • the understanding that non-adoption is not the mirror image of adoption.

The meat of the study is really worth getting into. The authors present a case study of the Belgian Federal Public Service Economy, a public body that evaluated OpenOffice.org but then decided to stay with Microsoft Office as its principal office toolset. Interestingly,

….the organisation opted for a hybrid approach, in which OpenOffice.org is installed on users’ workstations as a document convertor. This ensures that users can correctly open ODF documents on their workstations. OpenOffice.org is, however, not supported by the IT department.

So the “organisation” went for a solution that is, at least in part, “not supported by the IT department”. The plot thickens.

It’s a very interesting case study. There were three key projects:

  • introduction of a target platform for business critical application development
  • selection of a platform for business intelligence
  • standardisation of software offering Office-style functionality

Everything was set up right for the decision to go opensource. The European Commission had mandated that an ISO standard had to be used for exchanging documents by September 2009, and Open Document Format (ODF) was the only approved ISO standard. Belgian public sector companies were under pressure to save costs, and this increased the bias towards OpenOffice. And the manager in charge was a known sympathiser.

Just in case this wasn’t enough, the FPS Justice and the Brussels Public Administration, two similar public sector organisations in Brussels, had just opted for OpenOffice.

So let me repeat. Public sector organisation. In Brussels, the heartland of European bureaucracy. Needing to reduce costs. Needing to move to ODF. Led by a sympathiser. Surrounded by OpenOffice adopters.

With me so far? I guess so. Until I tell you what they did. They went for Microsoft Office. With the ODF plugin developed by Sun.

As I said, interesting case study.

Three things stood out for me. One, the decision making process appeared flawed. Project 2, the decision to go for a specific business intelligence platform, was “guided by the fact that [the platform] offers powerful integration with Microsoft Office”. How could this decision be taken before the decision to choose between OpenOffice and Microsoft Office?

Two, the decision appeared to be driven by heavy users rather than the regular users. The heavy users were the ones who carried out serious data-intensive activities, and had built a plethora of tools using the development platform around Microsoft Office. These tools were hard to price in terms of migration costs, and there was a lot of fear and doubt related to conversion and compatibility in general.

Three, no detailed TCO analysis had been made. I quote:

It should be noted, however, that some factors obscured the actual level of [these] potential cost savings. First, some of the licences for Microsoft Office had already been purchased, and were considered to be sunk costs by the FPS Economy. Second, our informants indicated that the TCO for OpenOffice.org could not be estimated precisely, due to the uncertainty regarding the cost of the conversion of applications and macros. Hence, during the project, no detailed TCO analysis was made.

But you know what? All that pales into insignificance when you read the next line:

This is consistent with the results of previous studies that showed that organisations found it difficult to assess the TCO of OpenOffice.org, even after having performed the migration (COSPA, 2005; Drozdik et al., 2005; Russo et al., 2003; Ven et al., 2007a, b; Wichmann, 2002).

Wow. People have carried out studies that prove that it is hard to work out the TCO for OpenOffice.org. Hmmmm. Anyone have meaningful TCOs for the alternatives?
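Here’s a toy sketch, with entirely invented numbers, of why the comparison is so slippery: treat TCO as a range rather than a point, and let the migration cost carry the uncertainty, as the study suggests it does.

```python
# A toy model of why the TCO comparison is slippery; every figure is invented.
def tco_range(licences: float, migration_low: float, migration_high: float,
              support_per_year: float, years: int = 5) -> tuple:
    """Return a (low, high) TCO range; migration is the uncertain term."""
    fixed = licences + support_per_year * years
    return (fixed + migration_low, fixed + migration_high)

# Incumbent: licences already bought are sunk costs, migration is nil,
# so the range collapses to a point.
incumbent = tco_range(licences=0, migration_low=0, migration_high=0,
                      support_per_year=100)

# Challenger: gratis licences, but converting macros and applications
# could cost almost anything.
challenger = tco_range(licences=0, migration_low=50, migration_high=800,
                       support_per_year=80)
```

With these made-up figures the incumbent’s range is (500, 500) and the challenger’s is (450, 1200). The ranges overlap, so the comparison is indeterminate; which is, roughly, what those studies keep finding.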