Yup, it’s time for another Very Provisional Post.
There’s something I don’t get about IT and ROI. Something fundamental. And that thing is: How can we possibly use the tools of a very old paradigm to solve the problems of a very new paradigm?
I guess this is something I’ve been musing about for fifteen years, ever since reading Paul Strassmann’s The Business Value of Computers.
I guess this is something I’ve wrestled with every time I’ve had to stand up and be counted during budget rounds at the various institutions I’ve worked in. And I’ve been in many such rounds, particularly since 2001, when the tone of the budget discussion was “Go South, Young Man”. And I wasn’t that young either.
I guess it is what was at the back of my mind when I read Nicholas Carr’s article in the Harvard Business Review in 2003, when I read his book a year later, and even when I spent time discussing various aspects of the issue with Andrew McAfee.
I guess I’m getting stupider as I grow older. You see, what gets me is this:
Ever since I read the Strassmann oeuvre, I’ve watched computing grow more distributed, more networked; I’ve seen a move towards more “enterprise architecture”, more middleware, more platforms. I’ve watched a substantial increase in complexity.
This increase in complexity manifests itself in many ways:
- requirements capture has gotten harder as we made the historical silos merge and coalesce
- estimation has gotten harder, since everything now connects with everything else
- testing has gotten harder, particularly regression and end-to-end testing
- delivery has gotten harder and slower as silo spaghetti entangled us
- fault replication has gotten harder, and as a consequence so has bug-fixing
- and everything has gotten harder as the enterprise boundaries began to extend and even disappear
As IT professionals, we’ve recognised this and tried to simplify the chessboard, exchanging pawns, pieces and even queens:
- using component architecture and reuse to speed up delivery
- using publish-subscribe bus architectures and adapter frameworks to reduce the number of interfaces (there’s a toy sketch of this after the list)
- using time-boxing to ease requirements gathering
- using fast iteration models to make the gathering process more accurate
- using increasing standardisation and rationalisation to simplify all this
- using consolidation, virtualisation and service orientation to derive at least a modicum of value out of Moore’s Law during all this
- using agile methods in general to speed up all of this
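To make the interface-count point concrete: wire n siloed systems together point-to-point and you end up maintaining on the order of n*(n-1) directional interfaces; put a bus in the middle and each system needs roughly one adapter. A minimal sketch of the publish-subscribe idea, in Python; the Bus class and topic names are mine for illustration, not any particular middleware’s API:

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """A toy publish-subscribe bus: each system writes one adapter
    to the bus instead of one interface per peer system."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = Bus()

# Two downstream silos each plug in once, instead of wiring to every upstream peer.
bus.subscribe("trade.booked", lambda e: print("ledger got", e))
bus.subscribe("trade.booked", lambda e: print("risk got", e))

bus.publish("trade.booked", {"id": 42, "qty": 100})

# The arithmetic that motivates the bus:
n = 10
print(n * (n - 1), "point-to-point interfaces vs", n, "bus adapters")
```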
I’ve watched all this happen, watched us learn. But.
During all this time, I haven’t really seen changes in the way we account for our IT investments and expenditures. I’ve seen papers about changes, particularly those suggesting a move towards option theory; I’ve seen articles about such changes; I particularly liked the SMR proposition of Big Bets, Options and No-Regrets Moves. I’ve taken part in long arguments about the processes we use to price and value investments in IT.
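For the curious, here’s roughly what the option-theory argument looks like when you write it down: treat a deferrable IT investment as a call option, a right but not an obligation to invest later, and price the flexibility itself. A back-of-envelope sketch, assuming a plain Black-Scholes framing and wholly made-up numbers; real-options practitioners use richer models:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def real_option_value(pv_payoff: float, cost: float, r: float,
                      sigma: float, t: float) -> float:
    """Black-Scholes call value, read as: present value of expected
    payoffs (S), investment cost (K), risk-free rate (r), payoff
    volatility (sigma), and how long we can defer the decision (t)."""
    d1 = (log(pv_payoff / cost) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return pv_payoff * norm_cdf(d1) - cost * exp(-r * t) * norm_cdf(d2)

# Illustrative numbers only: a project whose payoffs are worth 9m today
# against a 10m cost has negative NPV, yet the option to wait two years
# is still worth paying for, because the payoff is uncertain.
print("NPV:", 9.0 - 10.0)                                        # -1.0
print("Option value:", round(real_option_value(9.0, 10.0, 0.05, 0.4, 2.0), 2))
```

The point of the exercise: a discounted-cash-flow lens says “kill it”, an options lens says “keep the right to decide alive”. Which is precisely the kind of change in accounting I haven’t seen adopted.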
But, unlike the IT environment during that period, I haven’t really seen changes in the way we measure the ROI of IT. Just 50-year-old lipstick on 500-year-old pigs.
This was a problem in 1987. A bigger problem in 1997. And it’s an absolute killer in 2007.
You see, we’ve moved on. There have been various convergences: convergences of standards, of techniques, even of devices. The open-source community has had its effect, commoditising aggressively up the stack. We’ve seen telephony become software, we’ve seen the disaggregation and reaggregation of hardware, software and services. [Much of my disagreement with Carr is about timing, not direction.]
Today we have a new challenge. What Doc Searls calls The Because Effect.
In the past, we could claim there was a direct causal relationship between the investments made in IT and the returns, positive or negative. We had siloed systems, so we somehow managed to shoe-horn what we did into 15th-century mindsets. As everything became more connected, we couldn’t find the causal relationships any more, so we started wondering whether Strassmann et al were right. Yet we knew they couldn’t be: we could sense the productivity gains, the cycle time gains, the quality gains, even if they were later sacrificed. After all, there were many sacrificial altars: vendor lock-in, vendor bloat, the politics of projects, the tragedies of e-mail and spreadsheet, the system of professions.
Last week I was at a conference where there was much discussion about agile methods, and the issue of agile-versus-cumbersome-accounting came up. You know something? I’ve yet to work in a place where people were happy with the finance system. Ever. This, despite finance being one of the first places to be “automated”. I don’t wonder why, I know why. Just ask Sig.
Now things will get harder still. The Because Effect is something we live with already. We make money with X because of Y. X and Y aren’t unknowns we’re solving for. In many cases, Y is a commoditising infrastructure which enables or disables our ability to derive value out of X, the edge application.
Using traditional ROI techniques, we may drive investment away from both X and Y over time, as we continue the shoe-horning madness. That’s why I read what McAfee and Brynjolfsson researched, why I read what Carr researched. Our measurement tools aren’t up to the job. And the consequences could be tragic.
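A stylised example of that shoe-horning at work, with invented numbers: attribute returns silo by silo, and Y, the commodity layer, always looks like a dead loss, even though nothing at the edge earns a penny without it.

```python
# Illustrative numbers only. X is the edge application; Y is the
# commoditised infrastructure X runs on. Revenue only ever gets
# booked next to X, never next to Y.
cost_x, cost_y = 2.0, 8.0
revenue = 15.0  # earned "because of" Y, but "with" X

def roi(gain: float, cost: float) -> float:
    return (gain - cost) / cost

print("Silo ROI of X:", roi(revenue, cost_x))            # 6.5: looks heroic
print("Silo ROI of Y:", roi(0.0, cost_y))                # -1.0: looks like pure loss
print("Joint ROI:    ", roi(revenue, cost_x + cost_y))   # 0.5: the only honest number

# Measured in silos, Y gets cut at the next budget round;
# once Y is gone, X's 650% "return" goes with it.
```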
Just musing. And looking forward to the comments and flames.