Martha and Dank Redux: Thinking about Cloud Computing

It must be nearly ten years since I first read Larry Lessig’s Code and Other Laws of Cyberspace. When I read it, I remember being very taken with one of his early stories, that of Martha and Dank. Here’s an excerpted version:

It was a very ordinary dispute, this argument between Martha Jones and her neighbours. […..]. The argument was about borders — about where her land stopped. […..]. Martha grew flowers. Not just any flowers, but flowers with an odd sort of power. They were beautiful flowers, and their scent entranced. But, however beautiful, these flowers were also poisonous. […..]. The start of the argument was predictable enough. Martha’s neighbour, Dank, had a dog. Dank’s dog died. The dog died because it had eaten a petal from one of Martha’s flowers. […..]. “There is no reason to grow deadly flowers,” Dank yelled across the fence. “There’s no reason to get so upset about a few dead dogs,” Martha replied. “A dog can always be replaced. And anyway, why have a dog that suffers when dying? Get yourself a pain-free-death dog, and my petals will cause no harm.”

If you haven’t done so already, you should read the whole book. There’s an updated version available.

I read the original book during the heady days of 1999, and the lessons have stayed with me. Too theoretical for you? Take a look at this story from a few days ago: Woman in jail over virtual murder.

Let’s take a look at what actually happened. The woman was playing Maplestory, a massively multiplayer online role-playing game or MMORPG. As was the case with Dank in Lessig’s story, something happened to her in cyberspace. Her Maplestory virtual “husband” divorced her. And, just like Lessig prophesied with Dank, she was angry.

“I was suddenly divorced, without a word of warning. That made me so angry,” she was quoted by the official as telling investigators.

So what did she do? She took her revenge. She somehow got the log-in details of the man playing her virtual ex-husband, logged in as him, went into Maplestory and killed off her ex-husband character.

Yes, she took her revenge. In cyberspace. Revenge for an event that took place in cyberspace in the first place.

But. And here’s where it gets interesting, pretty much like Lessig wrote. The critical trigger event, the “divorce”, took place in Maplestory. The response, the “revenge”, took place in Maplestory. But the consequences of that revenge are taking place not in cyberspace, but in good ol’ bricks-and-mortar land.

The woman, a piano teacher, is now in a real jail. Bricks and mortar, with a bunch of metal bars thrown in for free. In jail. Today.

What’s this got to do with cloud computing? Maybe nothing. But here’s the way I look at it:

Until recently, we’ve thought we’ve been able to keep the real and virtual worlds distinct and separate. But we were wrong. They’ve already merged; the deed is done. We live in a hybrid world. The Maplestory incident is not unique. Leaving aside the world of MMORPGs, there’s been similar convergence in the social networking worlds. At the extreme, we’ve even had a number of cases where people have killed their partners after learning something about them via a social networking site. Examples are here and, more recently, here. Here the trigger events were in cyberspace, but the tragedies took place in real life. Real tragedies affecting real flesh-and-blood families, let us not forget that.

The merged world is here. Today. A merger of the “atoms” world and the “bits” world. But in this merged world, the laws that we have still pertain to atoms alone rather than to bits. Physical location is a key factor in determining jurisdiction and in referring to or selecting relevant legislation.

Today’s Economist has a 14-page special report on “Corporate IT”, which brings some of this up, particularly in the contexts of confidentiality, privacy, obscenity, hate crimes and libel. Data centres are located somewhere. Physically located somewhere. Cloud services, on the other hand, while fuelled by data centres, come by definition from “the cloud”. Anywhere. Everywhere.

We have a hybrid world, but without the right hybrid laws. To make matters worse, I think there’s a bigger problem looming. For decades the software industry has been privileged to be protected on one key issue: consequential loss. That might have been fine when software was software and services were services.

But not now. Not now, when software is delivered as a service. Not now when customers buy the service rather than the software. Think about it this way. Let’s say you run a shop, you rent premises and infrastructural services from a “landlord” within a shopping mall. And let’s say you were denied access to your shop for a while, or at the very least denied some basic services you were contracted to receive, like power for example. You’d have a pretty good claim on the landlord or the mall operator for putative losses sustained as a result of their non-delivery.


Software being delivered as a service, in an environment where virtual and physical worlds are colliding and converging, all against a backdrop of cloud services. Three significant dimensions of change, to be managed by laws that aren’t really fit for purpose for any one of the changes; all happening at a time when things are, shall we say, “delicate”, in the world of commerce. At least that’s the way it seems to me; I would love to be corrected.

We have a fascinating time ahead of us.

Happier Days

Remember Happy Days? Here’s a video that’s worth seeing (if you haven’t already), courtesy of Funny Or Die, which is a site worth bookmarking. Ron Howard making an appearance with Henry Winkler, the Fonz himself. Depending on your political persuasion, you’re going to love it or hate it. The good thing about such viral approaches is that you get to give that feedback by voting on the video. And it’s the continuing shape of things to come.

Estimating value of opensource

I came across this Linux Foundation press release via the 451 CAOS Theory blog. Headlined Estimating the Total Development Cost of a Linux Distribution, it was something I had no choice but to read. And it makes interesting reading.

I gave the report a quick once-over; initial reactions were not good, and I was up in arms about a number of things, three in particular. For one thing, the report relies on replacement cost as a basis for valuation; even if I were comfortable with the way the replacement costs were calculated, I would still be less comfortable with the replacement-costs-alone approach to valuation. A second issue, openly admitted to in the report, is to do with the use of Source Lines of Code (SLOC), and the quantity-not-quality risk that comes with it. And the third issue, also alluded to in the report, is the use of COCOMO (COnstructive COst MOdel) in an opensource context, a model that grew out of traditional proprietary development.
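For the curious, the COCOMO approach that SLOC-based studies like this lean on boils down to a power law over line counts. Here is a minimal sketch, with illustrative salary and overhead figures rather than the report’s actual inputs:

```python
# Basic COCOMO ("organic" mode): effort in person-months = a * KSLOC^b.
# Tools such as SLOCCount use this to turn a raw line count into a
# replacement-cost estimate. The salary and overhead numbers below are
# illustrative assumptions, not the figures used in the report.

def cocomo_effort_pm(ksloc, a=2.4, b=1.05):
    """Estimated effort in person-months for ksloc thousand lines of code."""
    return a * ksloc ** b

def replacement_cost(sloc, annual_salary=75_000, overhead=2.4):
    """Replacement cost: effort multiplied by a loaded monthly labour rate."""
    effort_pm = cocomo_effort_pm(sloc / 1000.0)
    monthly_rate = annual_salary / 12.0 * overhead  # the "wrap rate"
    return effort_pm * monthly_rate

# e.g. a hypothetical 10-million-SLOC distribution component
print(f"${replacement_cost(10_000_000):,.0f}")
```

Note that with b greater than 1, effort grows slightly faster than linearly in SLOC, which is one reason a quantity-not-quality measure can inflate a valuation.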

But I decided to set all these reactions aside, and sought to concentrate on what I could learn from the report. Three key things occurred to me:

1. We still haven’t really “got” the Because Effect. When someone says “The Linux operating system is the most popular open source operating system in computing today, representing a $25 billion ecosystem in 2008”, I start worrying. After all, Google alone is worth a tad more than $25 billion, even at today’s prices. When the someone in question is the Linux Foundation, I worry a little more. Google’s valuation is at least in part due to its operating costs being what they are, based on extensive use of opensource software.

2. We still haven’t really “got” global sourcing. Twenty years after the offshore industries began, we’re still using Western proxies for pricing labour, and wrap rates that appear to be based on traditional in-house approaches rather than partnered and offshored models.

3. We still haven’t really “got” the implications of community development. This, despite the work done by people like Eric von Hippel and Yochai Benkler, despite the prodigious outputs of many people in looking at, analysing, reporting on and summarising what’s happening in this field. Opensource is a well-established exemplar of community-based development, and we have to get our heads around the way this is valued, both in enterprises and across the industry as a whole.

Maybe I shouldn’t have started those three points with “we”. Maybe it’s me. What is clear to me is that I need to learn a lot about estimation and valuing and costing and pricing in a global, community-based, commodity-enabled open platform world.

And studies like the one I just finished reading will help me get there, as I begin to see what works and what doesn’t, what is known, what answers aren’t forthcoming as yet. So thank you Linux Foundation, thank you Amanda McPherson, Brian Proffitt and Ron Hale-Evans. At the very least you’ve given me stuff to critique, stuff that I can point to and say “that works for me, that doesn’t work for me”. But in fact you’ve given me a lot more: stuff to think about, stuff to work on.

So I will give the report another, slower read, and revert to the authors with comments and questions. Maybe you’d like to do the same.

Musing about tipping points and connectors

I loved this story from a Malcolm Gladwell and Tipping Point perspective. [My thanks to @monkchips for tweeting about it].

Whatever your political persuasion, do take part. There is something for all of us to learn from such experiments, so can I encourage you to participate in what Hjortur is doing? Visit IfTheWorldCouldVote.com and do what you feel you must.

Hjortur’s Australian tale reminded me of some of my early experiences after I started blogging “externally”. In those days I had a ClusterMaps plugin, and I could actually tell who was lighting up what. That dot in New Zealand is so-and-so who went back there. That one in the Caribbean is so-and-so on vacation. And so on and so forth.

Soon I expect we will be able to do that with far greater ease than we can imagine today. Especially if we keep taking part in experiments that show us how it all works at a simple yet meaningful level.

Musing gently about traffic and information and buyers and sellers

One of the themes that recurs quite often in the conversations I have is that of the next 50 billion devices. While people argue about the next 3 billion people getting connected, and while people wander around believing that the internet is nothing more than an illegal distribution vehicle attacking Hollywood, I get more and more intrigued by how information is becoming “live”.

Take this for example:

It’s a representation of the buses in Bangalore. [My thanks to @abhilash for letting me know via Twitter].

As you hover over the bus icon, you can see its location, bus number, current speed and some sort of unique reference to the vehicle itself. Looking at it, my natural reaction was to want more. Like “how crowded is it?”. Information that is not that easy to collect today, information that will become easier and easier to collect over time. And assimilate. And report. And display. Allowing each of us to make more informed decisions.
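To make that concrete, here is a sketch of the kind of record such a live feed might expose, and the sort of query the extra information would enable. The field names, including the as-yet-uncollected occupancy figure, are hypothetical, not the Bangalore feed’s actual schema:

```python
# A hypothetical record for a live vehicle feed. Field names are invented
# for illustration; occupancy stands in for "how crowded is it?" --
# information not yet collected, hence optional.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BusPosition:
    bus_number: str
    vehicle_id: str           # unique reference to the vehicle itself
    lat: float
    lon: float
    speed_kmh: float
    occupancy: Optional[float] = None  # 0.0 (empty) to 1.0 (full), if known

def crowded(positions: List[BusPosition], threshold: float = 0.8):
    """Return the buses whose reported occupancy exceeds a threshold."""
    return [p for p in positions
            if p.occupancy is not None and p.occupancy > threshold]

buses = [
    BusPosition("335E", "KA-01-1234", 12.9716, 77.5946, 32.0, occupancy=0.9),
    BusPosition("201", "KA-01-5678", 12.9352, 77.6245, 18.5),  # occupancy unknown
]
print([b.bus_number for b in crowded(buses)])
```

Once a field like occupancy is collected, assimilated and displayed, a query this simple is all a passenger needs to make a more informed decision.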

I travel around London using a variety of vehicular and ambulatory options: I don’t drive. That does not mean I don’t use a car; when I do use one, someone else has to drive. Comes from never learning how to drive. When I was growing up, if you were rich enough to have a car, you tended to be rich enough to have a driver. And so I didn’t learn then, and what with one thing and the other, I’ve never done so since.

When I travel around London, one of the things that amazes me is the apparent emptiness of the bendy buses; I’ve assumed that it is because of the time I travel, which tends to be early or late rather than peak. But then I think to myself: if I were running the bus network, would it not be possible to be more “just-in-time” with the whole thing? I know that buses do get rescheduled and repurposed, but I sense that everything could be more efficient if better information were given to the people who decide.

Today it is not just about information to the centre but also to the passengers. As they, the passengers, make more and more informed decisions, everything should work better.

So there’s a metamorphosis going on. Stage one is where information is “automatically” collected and passed to the “centre”, allowing apparently better supply decisions; stage two is when this information is also passed to the edge, allowing the demand side to operate more effectively. But this is the static web, the classic web.

What excites me is stage three, when the demand side can signal its intentions to the supply side cheaply and accurately and dynamically, and the supply side can respond cost-effectively. This has not yet happened, but the early signs are there: P2P collection of information on the fly, of which the intention, or signal, is an extreme case.
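A toy sketch of what such a demand-side signal might look like, all names and fields invented for illustration:

```python
# A hypothetical "stage three" message: the demand side publishing an
# intention that the supply side can respond to. The schema is invented
# for illustration, not drawn from any real VRM system.
import json
from datetime import datetime, timezone

def make_intention(passenger_id, origin_stop, destination_stop, depart_after):
    """Build an intention: a cheap, accurate, dynamic demand-side signal."""
    return {
        "type": "travel-intention",
        "passenger": passenger_id,
        "from": origin_stop,
        "to": destination_stop,
        "depart_after": depart_after,
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }

signal = make_intention("p-123", "Putney", "Bank", "2008-11-30T08:00:00Z")
print(json.dumps(signal))
```

The interesting part is not the message itself but the direction of flow: information about tomorrow, travelling from the edge to the supply side.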

There’s been talk of the Intention Economy for quite some time now; the issue is more about how to make it happen. The VRM movement has been working on this for a while. As the power to collect and provide relevant information moves from the core to the edge, we will see this happen more and more. That is what the promise of the participative web is all about. The power of VRM, the power of the intention economy: all these rely on the ability of the edge to provide better information about tomorrow.

We have spent too long dealing with better information about yesterday; we have to get more and more involved in a world where we have better information about tomorrow.