Musing about the ROI of IT

Yup, it’s time for another Very Provisional Post.

There’s something I don’t get about IT and ROI. Something fundamental. And that thing is: How can we possibly use the tools of a very old paradigm to solve the problems of a very new paradigm?

I guess this is something I’ve been musing about for fifteen years, ever since reading Paul Strassmann’s The Business Value of Computers.

I guess this is something I’ve wrestled with every time I’ve had to stand up and be counted during budget rounds at the various institutions I’ve worked in. And I’ve been in many such rounds, particularly since 2001, where the tone of the budget discussion was “Go South, Young Man”. And I wasn’t that young either.

I guess it is what was at the back of my mind when I read Nicholas Carr’s article in the Harvard Business Review in 2003, when I read his book a year later, and even when I spent time discussing various aspects of the issue with Andrew McAfee.

I guess I’m getting stupider as I grow older. You see, what gets me is this:

Ever since I read the Strassmann oeuvre, I’ve watched computing grow more distributed, more networked; I’ve seen a move towards more “enterprise architecture”, more middleware, more platforms. I’ve watched a substantial increase in complexity.

This increase in complexity manifests itself in many ways:

  • requirements capture has gotten harder as we made the historical silos merge and coalesce
  • estimation has gotten harder, since everything now connects with everything else
  • testing has gotten harder, particularly regression and end-to-end testing
  • delivery has gotten harder and slower as silo spaghetti entangled us
  • fault replication has gotten harder, and as a consequence so has bug-fixing
  • and everything has gotten harder as the enterprise boundaries began to extend and even disappear

As IT professionals, we’ve recognised this and tried to simplify the chessboard, exchanging pawns, pieces and even queens:

  • using component architecture and reuse to speed up delivery
  • using publish-subscribe bus architectures and adapter frameworks to reduce the number of interfaces
  • using time-boxing to ease requirements gathering
  • using fast iteration models to make the gathering process more accurate
  • using increasing standardisation and rationalisation to simplify all this
  • using consolidation, virtualisation and service orientation to derive at least a modicum of value out of Moore’s Law during all this
  • using agile methods in general to speed up all of this

I’ve watched all this happen, watched us learn. But.

During all this time, I haven’t really seen changes in the way we account for our IT investments and expenditures. I’ve seen papers about changes, particularly those suggesting a move towards option theory; I’ve seen articles about such changes: I particularly liked the SMR proposition of Big Bets, Options and No-Regrets Moves. I’ve taken part in long arguments about the processes we use to price and value investments in IT.
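To make the option-theory point concrete, here is a minimal sketch, entirely my own illustration rather than anything from Strassmann or the SMR piece, that treats a deferrable IT investment as a call option and prices it with textbook Black–Scholes. All the figures are made up.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def real_option_value(s, k, t, r, sigma):
    """Black-Scholes value of the option to make an investment.

    s     -- present value of the returns the project might generate
    k     -- cost of making the investment
    t     -- years the decision can be deferred
    r     -- risk-free rate
    sigma -- volatility (uncertainty) of the project's value
    """
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# A project with negative NPV today can still carry positive option value:
npv = 90.0 - 100.0                                    # -10: traditional ROI says walk away
option = real_option_value(90.0, 100.0, 2.0, 0.05, 0.4)
print(round(npv, 2), round(option, 2))
```

The point of the sketch is the gap between the two numbers: a one-shot NPV calculation kills the project, while the option framing assigns real value to the right, but not the obligation, to invest later once uncertainty resolves.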

But, unlike the IT environment during that period, I haven’t really seen changes in the way we measure the ROI of IT. Just 50-year-old lipstick on 500-year-old pigs.

This was a problem in 1987. A bigger problem in 1997. And it’s an absolute killer in 2007.

You see, we’ve moved on. There have been various convergences, convergences of standards, of techniques, even of devices. The opensource community has had its effect, commoditising aggressively up the stack. We’ve seen telephony become software, we’ve seen the disaggregation and reaggregation of hardware, software and services. [Much of my disagreement with Carr is about timing, not direction.]

Today we have a new challenge. What Doc Searls calls The Because Effect.

In the past, we could claim there was a direct causal relationship between the investments made in IT and the returns, positive or negative. We had siloed systems, so we somehow managed to shoe-horn what we did into 15th-century mindsets. As everything became more connected, we couldn’t find the causal relationships any more, so we started wondering whether Strassmann et al were right. Yet we knew they couldn’t be: we could sense the productivity gains, the cycle time gains, the quality gains, even if they were later sacrificed. After all, there were many sacrificial altars: vendor lockin, vendor bloat, the politics of projects, the tragedies of e-mail and spreadsheet, the system of professions.

Last week I was at a conference where there was much discussion about agile methods, and the issue of agile-versus-cumbersome-accounting came up. You know something? I’ve yet to work in a place where people were happy with the finance system. Ever. This, despite finance being one of the first places to be “automated”. I don’t wonder why, I know why. Just ask Sig.

Now things will get harder still. The Because Effect is something we live with already. We make money with X because of Y. X and Y aren’t unknowns we’re solving for. In many cases, Y is a commoditising infrastructure which enables or disables our ability to derive value out of X, the edge application.

Using traditional ROI techniques, we may drive investment away from both X and Y over time, as we continue the shoe-horning madness. That’s why I read what McAfee and Brynjolfsson researched, why I read what Carr researched. Our measurement tools aren’t up to the job. And the consequences could be tragic.

Just musing. And looking forward to the comments and flames.

Where generations meet

My thanks to Chris “Rageboy” Locke for making sure I saw this “map” of online communities:

Interesting map. You can find the original here.

I looked at it with Saturday morning eyes, and was struck by the following:

One, it represents many generations, but the generations are often isolated. Neither Orla, my 21-year-old daughter, nor Isaac, my 15-year-old son, would recognise Usenet or for that matter IRC, even if they used bits of what they represented. Hope, my 9-year-old youngest, would spend time looking for Stardoll and not find it, much like Wikipedia does in the link I’ve shown.

Two, there are some dogs that aren’t barking, and it is worth considering why. Not seeing a Twitter I can understand, maybe that’s just a function of when the map was done. Maybe Amazon and eBay are seen as old hat communities and therefore stuck away in the Mountains of Web 1.0 somewhere in the icy north, yet I think they deserve individual mention. Surely Bebo is large enough to be on the map? Surely CyWorld needs to be represented with a much larger area? And what do we do with opensource communities like mozilla and WordPress, “ecosystem” communities like netvibes and collections of people like LinkedIn?

Three, we need to understand more about the places where the generations meet, and why that is the case. Facebook and YouTube both span multiple generations now, Flickr and last.fm have that effect as well, but I find places like DeviantArt far more intriguing. What is it about communities like DeviantArt that they became ageless from the start? What can we learn from them?

Of course there is some presenter bias coming through, in terms of what has been placed where and how (or for that matter why Cory alone gets named). Nevertheless, there’s something about the map I find intensely intriguing, and I will be posting on it later. In the meantime, treat it like any other Saturday morning post.

I think there is a lot I can learn about the different characteristics of online networks, why some are called social networking sites and others aren’t, why some seem boringly Web 1.0 and others don’t, why some support many modes of conversation and others don’t. When I get invited to a group like Booligan on Facebook, how do I learn from that? How come no one has posted any textbooks there? Lots of questions about how online communities work. Maybe I need to go sit at the feet of Amy Jo Kim, rather than just read her book again and again. Maybe I need to spend more time understanding Howard Rheingold and Steven Johnson. Or maybe I just keep having these conversations with Doc and Rageboy and David.

There’s a cluetrain running through the map. I need to figure out where it’s going. Observations, comments and pointers welcome.

Opensource, blogging and the Upside of Down

While reading Thomas Homer-Dixon’s The Upside Of Down, [thanks! Kaliya] I was intrigued by his consideration of opensource in a chapter titled Catagenesis, which he defines as “the creative renewal of our technologies, institutions and societies in the aftermath of breakdown”.

I quote sporadically from the chapter:

Scientists have found that complex systems that are highly adaptive….tend to share certain characteristics. First of all, the individual elements that make up the systems….are extraordinarily diverse. Second, the power to make decisions and solve problems isn’t centralised in one place or thing; instead, it’s distributed across the system’s elements….Third and finally, highly adaptive systems are unstable enough to create unexpected innovations but orderly enough to learn from their failures and successes. Systems with these three characteristics stimulate constant experimentation, and they generate a variety of problem-solving strategies.

We’re all familiar with just such a system — the internet, and its subsystem, the World Wide Web. In one respect, humanity is extraordinarily lucky: just when it faces some of the biggest challenges in its history, it has developed a technology that could be the foundation for extremely rapid problem solving on a planetary scale, for radically new forms of democratic decision making, and most fundamentally for the conversation we must have among ourselves to prepare for breakdown. So far, though, we’ve barely tapped this potential. The Internet and Web — rather than becoming powerful instruments of problem solving, adaptation and social inclusion — have simply turned into venues for a screaming cacophony of electronic narcissism.

The situation may be changing.

[……..]

So far, though, opensource approaches have been applied to solving technical problems like the creation of complex software or large databases. Now we urgently need research to see if we can use this kind of problem-solving approach — and the culture of voluntarism that underpins it — to address the ferociously hard social, political and environmental problems discussed in this book.

[……..]

But even if opensource methods can’t give us clear and final solutions to problems that are ultimately rooted in politics, they’re still a powerful way to develop scenarios, experiment with ideas and lay plans in advance of breakdown. And, most important, they can help us build worldwide communities of like-minded people who, in the course of working together on tasks, become bound together by trust and shared values and understandings. Such communities would then be better able to act with common purpose in a moment of contingency and to seize the opportunity for catagenesis.

Thomas Homer-Dixon is a lot more erudite and reasoned and articulate than I am. His words resonate a lot with me, and help define why I blog. I’m fundamentally a renaissance man, and I can see how a blogging culture, a true conversational and provisional opensourcing of ideas and opinions, will help us recognise, review, respond to and recover from the catastrophes that Homer-Dixon refers to.

T’ain’t What You Do, (It’s the Way That You Do It)

…that’s what gets results (Sy Oliver, 1939)

Two of my favourite academics, Andrew McAfee and Erik Brynjolfsson, recently collaborated to publish a fascinating article on Carr’s Disease. For those who haven’t come across it, Carr’s Disease is a relatively rare condition, cycling the patient through intermittent bouts of selective blindness and 20:20 vision.

Andy and Erik set out to test some intriguing formal theories they had developed, focused on the market share concentration and sales turbulence associated with industries that had a high IT intensity and associated investment. If the data supported the theories, they would have formal evidence of the difference in the competitive dynamics between low-, medium- and high-IT-intensity firms, thereby proving the psychosomatic nature of Carr’s Disease.

I quote from the article:

The link between IT and competition surprises many researchers and executives for two reasons. First, most companies buy technology to gain control over their environments, not lose it. Enterprise systems help companies create consistency and reduce randomness, so it’s ironic that a high level of such investment would be associated with a more frenzied competitive environment.

[Image]

Second, some observers have argued that information technology is so pervasive that it no longer offers companies any big advantage. If many businesses in the same industry bought the same type of large-scale commercial-enterprise software, there is reason to believe they would subsequently become more similar, and the competitive field would level. Instead, something close to the opposite has taken place.

Fascinating. Read the article for yourself, and do follow the dialogue at Andy’s blog. Here’s an excerpt:

We reasoned that since IT spending increased substantially beginning in the mid 1990s, and since not all industries spend equally on technology, one good way to assess IT’s competitive impact would be to see if there was a difference in differences: if industries that spent a lot on IT (‘high IT’ industries) experienced different competitive dynamics after the mid 1990s than they did before, and if there was a difference in this respect between high and low IT industries. If both these differences existed, it would be a strong indication that IT mattered —  that it was a driving force behind the observed changes in competition.

But how to measure competition?  One common metric is concentration: the extent to which market share is held by a few big firms, rather than many small ones. We also looked at turbulence, or the extent to which firms in an industry jump around in rank order from year to year (If the #5 firm in sales one year is #10 the following year, this is a pretty turbulent industry).

The results of our analyses were clear. High IT industries experienced significantly greater turbulence and concentration growth after the mid 1990s than they did before, and these differences were not as pronounced in low IT industries. The Business Insight article contains graphs that show these differences, and I’ll post and discuss more results here later. For now I just want to point readers to the article (a paper written for an academic audience that discusses the research design and results in more formal terms is available here) and solicit their reactions.
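For readers who want the two metrics made concrete, here is a minimal sketch of how they might be computed, my own illustration rather than the authors’ code: concentration via the Herfindahl–Hirschman index (sum of squared market shares), and turbulence as the average year-on-year shift in sales rank.

```python
def herfindahl(sales):
    """Concentration: sum of squared market shares (higher = more concentrated)."""
    total = sum(sales)
    return sum((s / total) ** 2 for s in sales)

def turbulence(sales_by_year):
    """Average absolute change in sales rank between consecutive years.

    sales_by_year -- list of {firm: sales} dicts, one per year.
    """
    def ranks(year):
        # Rank firms 1..n by sales, largest first.
        ordered = sorted(year, key=year.get, reverse=True)
        return {firm: i + 1 for i, firm in enumerate(ordered)}

    shifts, pairs = 0, 0
    for prev, curr in zip(sales_by_year, sales_by_year[1:]):
        r_prev, r_curr = ranks(prev), ranks(curr)
        common = set(r_prev) & set(r_curr)        # firms present in both years
        shifts += sum(abs(r_prev[f] - r_curr[f]) for f in common)
        pairs += len(common)
    return shifts / pairs if pairs else 0.0

# Toy data: the leader in year one drops to last place in year two.
years = [
    {"A": 50, "B": 30, "C": 20},
    {"A": 20, "B": 45, "C": 35},
]
print(herfindahl([50, 30, 20]), turbulence(years))
```

The specific measures the authors used may differ in detail; the sketch is just to show that both quantities fall out of nothing more than annual sales figures per firm.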

These dialogues are important, so could I encourage you to participate?

My personal take on it is quite simple. First, the concentration growth. Any industry sector facing increased commoditisation and consequent margin pressure should display the concentration growth characteristics found in the study, as participants strive to hold on to the vanilla while seeking to grow the higher-margin complex; there is always a shake-out and a consolidation. I think this happens in low-, medium- and high-IT-intensity sectors; the difference is that the process is accelerated by IT and therefore more pronounced in the high-IT-intensity sectors.

The sales turbulence is far more interesting. My guess is that it shows how poor we really are in sustaining the value generated by IT investments. This happens for a variety of reasons: suboptimal prioritisation processes; an unwillingness to make the requisite people-process-technology lockstep changes that will crystallise the value; a lack of understanding of the sheer speed of change in the high-intensity competitive environment; inconsistent approaches to IT investments within different divisions of the same firm, creating artificial and unnecessary variances in performance; a cyclical tug-of-war between product and service innovation champions as executives revert to type; the list is endless.

One way of looking at it is that we need process innovation in order to sustain the value generated by IT investment; I think that the greater the IT-intensity of a sector, the more radical the process innovation has to be. For most of us, this is uncharted territory: we haven’t quite grasped the “Don’t Automate, Obliterate” mantras that Hammer was espousing in the late 1980s; instead, we still have this tendency to “pave the cowpaths”, as he termed it. I think this is caused, at least in part, by the battle between professions that Andrew Abbott writes about, as we move gently towards a Wilsonian consilience.

In essence, I believe that the market share concentration growth is a phenomenon that manifests itself in all industry sectors facing commoditisation, and that this phenomenon is accelerated in high-IT-intensity industries, an “environmental” effect. On the other hand, I believe that the sales turbulence increase is a self-inflicted wound, showing our relative immaturity in sustaining the value we create. This is not surprising given our ambivalent attitude to such investments, itself a reaction to irrational markets and, sadly, also a response to Carr.

Perhaps I should have left it to Sy and Ella to explain what I think (or maybe Fun Boy Three and Bananarama for younger readers). Do let Andy know what you think, or comment here if you feel lazy, I’ll make sure he gets it.

From “Cease and Desist” to “Proceed with Permission”

Second Life has had its fair share of critics as well as a very generous dollop of media hype. It would not surprise me if you feel you’ve been overexposed to the topic. Nevertheless, you should take a look at GetAFirstLife. Nice to see that parody isn’t dead in either Life.

I was particularly pleased to see this response, ostensibly from Second Life:

This notice is provided on behalf of Linden Research, Inc. (“Linden Lab”), the owner of trademark, copyright and other intellectual property rights in and to the “Second Life” product and service offering, including the “eye-in-hand” logo for Second Life and the website maintained at http://secondlife.com/.

It has come to our attention that the website located at http://www.getafirstlife.com/ purports to appropriate certain trade dress and marks associated with Second Life and owned by Linden Lab. That website currently includes a link in the bottom right-hand corner for “Comments or cease and desist letters.”

As you must be aware, the Copyright Act (Title 17, U.S. Code) contains provisions regarding the doctrine of “fair use” of copyrighted materials (Section 107 of the Act). Although lesser known and lesser recognized by trademark owners, the Lanham Act (Title 15, Chapter 22, U.S. Code) protecting trademarks is also limited by a judicial doctrine of fair use of trademarks. Determining whether or not a particular use constitutes fair use typically involves a multi-factor analysis that is often highly complex and frustratingly indeterminate; however a use constituting parody can be a somewhat simpler analysis, even where such parody involves a fairly extensive use of the original work.

We do not believe that reasonable people would argue as to whether the website located at http://www.getafirstlife.com/ constitutes parody – it clearly is. Linden Lab is well known among its customers and in the general business community as a company with enlightened and well-informed views regarding intellectual property rights, including the fair use doctrine, open source licensing, and other principles that support creativity and self-expression. We know parody when we see it.

Moreover, Linden Lab objects to any implication that it would employ lawyers incapable of distinguishing such obvious parody. Indeed, any competent attorney is well aware that the outcome of sending a cease-and-desist letter regarding a parody is only to draw more attention to such parody, and to invite public scorn and ridicule of the humor-impaired legal counsel. Linden Lab is well-known for having strict hiring standards, including a requirement for having a sense of humor, from which our lawyers receive no exception.

In conclusion, your invitation to submit a cease-and-desist letter is hereby rejected.

Notwithstanding the foregoing, it is possible that your use of the modified eye-in-hand logo for Second Life, even as parody, requires license from Linden Lab, especially with respect to your sale of goods with the parody mark at http://www.cafepress.com/getafirstlife/. Linden Lab hereby grants you a nonexclusive, nontransferable, nonsublicenseable, revocable, limited license to use the modified eye-in-hand logo (as displayed on http://www.getafirstlife.com/ as of January 21, 2007) to identify only your goods and/or services that are sold at http://www.cafepress.com/getafirstlife/. This license may be modified, addended, or revoked at any time by Linden Lab in its sole discretion.

Best regards,

Linden Lab

There’s still hope for us after all. Whichever life we’re in. I find such things refreshing.