Wond’ring Aloud

Wond’ring aloud/how we feel/today

Jethro Tull, Wond’ring Aloud (Ian Anderson). From the album Aqualung.

Photo courtesy Patryk Pigeon

From late 2005 on, there was a very interesting discussion about Web 2.0 and SOA. John Hagel, Nicholas Carr, Andrew McAfee and Dion Hinchcliffe were involved, amongst others. To refresh your memory (or to make it easier for you in the event you hadn’t actually come across the debate), here are some of the key links:

Web 2.0 for the enterprise?

SOA versus Web 2.0?

Enterprise 2.0: The dawn of emergent collaboration

The web services schism

I was Global CIO at Dresdner Kleinwort at the time, and found the debate both timely and very relevant to the challenges we faced. Across the industry, the promise of high cohesion and loose coupling propounded by the web services revolution and SOA seemed somewhat remote, more standards wars than design principles in character; the expected small-pieces-loosely-joined outcome looked less and less likely to be met as work backlogs grew; those organisations that had implemented enterprise buses seemed less affected than those that hadn’t, but it still wasn’t pretty; everywhere we looked, there were variants of vertically integrated stacks, benighted in the belief that transaction costs would actually tumble as a result. While we were using a number of Web 2.0 technologies at the bank, they were not integrated with the transactional side of the bank, in terms of research and trading, and were still some distance away from back office operations.

It was around that time that we were learning more about how open multisided platforms could work, piggybacking on what the open source community was doing, and, despite Stallman’s warnings to take care with the term, people started talking about software ecosystems. And that got me thinking more about the transaction costs aspect of these architectures.

Over time, what appeared to be happening was that SOA dominated the traditional “back office” and “transaction processing” worlds, while “Web 2.0” approaches were used to deal with customer-facing applications. Now this was just anecdotal evidence, nothing deeply scientific about it… but the schisms spoken of by Hagel and Carr and Hinchcliffe et al were becoming more visible. I’d already nailed my colours to the mast by proposing, around that time, that search, subscription, conversation and fulfilment were the “four pillars” of enterprise software, so I was comfortable with what was happening. But I was still keen on understanding more about why, and wanted to do this in the context of transaction costs.

For some years, I had been playing with models for managing systems estate change; I was particularly keen on a principle I called “Spectrum”, where I could visualise the firm’s architecture as a series of loosely coupled layers, at one extreme touching the customer, and at the other touching the darkest denizens of back office operations. The idea was to colour-code clusters of systems using the visible spectrum, while declaring what happened on customer desktops “ultraviolet” and what happened at exchanges and payment mechanisms “infrared”. In between, “violet” represented apps that touched the customer, a layer exhibiting rapid change, and “red” represented accounting apps, a layer exhibiting glacial change; everything in between covered pre-trade, trading, post-trade, risk and settlement. The idea behind “Spectrum” was that an app could only be changed at the pace consistent with the layer it inhabited; it could ask for change in layers “above” it, layers with a faster propensity to change, but had no right to request speedy changes to apps in layers “below” it; apps in each layer had to respect the rate of change associated with apps in slower layers. As a consequence, in my then utopian style, I had hoped to minimise the regression-testing logjam of enterprise architecture; we’d already avoided Spaghetti Junction by going for a bus architecture rather than point-to-point interfaces.
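
To make the rule concrete, here is a minimal sketch. The layer names echo the colour-coding above, but the cadences (in days) and the function names are illustrative assumptions of mine, not how anything was actually implemented at the bank:

```python
# A minimal, purely illustrative sketch of the "Spectrum" rule above.
# Layer names echo the colour-coding; the cadences (in days) are invented
# for the example, not the bank's actual estate.

SPECTRUM_CADENCE_DAYS = {
    "ultraviolet": 1,    # customer desktops
    "violet": 7,         # customer-touching apps: rapid change
    "green": 30,         # pre-trade / trading
    "orange": 90,        # post-trade / risk / settlement
    "red": 365,          # accounting: glacial change
    "infrared": 730,     # exchanges, payment mechanisms
}

def may_request_speedy_change(requesting_layer: str, target_layer: str) -> bool:
    """An app may ask for quick change only of layers that change at least
    as fast as its own; slower layers keep their own pace."""
    return SPECTRUM_CADENCE_DAYS[target_layer] <= SPECTRUM_CADENCE_DAYS[requesting_layer]

def expected_cadence_days(target_layer: str) -> int:
    """Whatever is asked of a layer lands at that layer's own cadence."""
    return SPECTRUM_CADENCE_DAYS[target_layer]

# Violet (customer-facing) asking red (accounting) for a change must respect
# accounting's cadence; the reverse direction is no problem.
print(may_request_speedy_change("violet", "red"))   # False
print(may_request_speedy_change("red", "violet"))   # True
print(expected_cadence_days("red"))                 # 365
```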

What I’d established in my own mind was a growing belief that the issue was to do with rates of change and costs of change. Vertical integration paid off when the rate of change was low. Networked small-pieces approaches paid off when the rate of change was high.
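
A toy way to see that intuition, with numbers that are entirely invented just to show the crossover: an integrated stack carries low coordination overhead but a high cost per change, while a networked small-pieces approach carries higher coordination overhead but a low cost per change.

```python
# Toy comparison, purely to illustrate the intuition above; all numbers invented.

def annual_cost(coordination_cost: float, cost_per_change: float,
                changes_per_year: int) -> float:
    # Fixed coordination overhead plus the cost of making each change.
    return coordination_cost + cost_per_change * changes_per_year

for changes_per_year in (2, 50):
    stack = annual_cost(coordination_cost=10, cost_per_change=20,
                        changes_per_year=changes_per_year)
    network = annual_cost(coordination_cost=60, cost_per_change=2,
                          changes_per_year=changes_per_year)
    winner = "integrated stack" if stack < network else "networked small pieces"
    print(f"{changes_per_year} changes/year: {winner} is cheaper "
          f"(stack={stack}, network={network})")
```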

And then the time came to move on from the bank, and the challenges I faced were different. Telcos were very much about stacks rather than ecosystems; enterprise buses were rare; and open multisided platforms too outrageous to consider (though we did!). But the debate of integrated stack and SOA versus ecosystem and Web 2.0 continued to intrigue me. [Now I know that’s an oversimplification, that SOA should really be about a set of design principles rather than explicit technical implementations and reference architecture, but what I saw was largely less of the former and more of the latter.]

Fast forward to a couple of years ago, and Clay Shirky. Clay (like John, Nick and Dion, someone I read regularly) wrote something about the collapse of complex companies, and referred to the work of Joseph Tainter in the process. While I’d heard of Tainter, I hadn’t read his work in depth, and I proceeded to dig into The Collapse of Complex Societies. [It was a subject I’d been mesmerised by since youth].

That led on to my finding other pieces by Tainter, including the diagrams below:

The first, above, looks at the productivity of the US healthcare system between 1930 and 1982. (Tainter defines the productivity index as life expectancy divided by the ratio of health expenditure to GDP.) [I must admit I was reminded of this chart when I came across the Hagel/Seely Brown/Davison Big Shift thinking a year or two ago.]

The second, which actually occurs earlier in Tainter’s paper, seeks to model diminishing returns to increasing complexity. Both diagrams are taken from Tainter’s Complexity, Problem Solving and Sustainable Societies, 1996.
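
For clarity, the index behind the first chart is just life expectancy divided by the share of GDP spent on health. A toy computation (the figures below are invented for illustration, not the data behind Tainter’s chart) shows how rising spend drags the index down even as life expectancy creeps up:

```python
# Toy illustration of Tainter's healthcare productivity index:
# life expectancy divided by (health expenditure / GDP).
# The figures are invented for illustration, not Tainter's data.

def productivity_index(life_expectancy_years: float,
                       health_spend: float, gdp: float) -> float:
    return life_expectancy_years / (health_spend / gdp)

print(productivity_index(63, 4, 100))   # 1575.0  (4% of GDP, life expectancy 63)
print(productivity_index(74, 10, 100))  # 740.0   (10% of GDP, life expectancy 74)
```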

And so to today. Development backlogs are endemic, as the sheer complexity of the grown-like-Topsy stack slows the process of change and makes it considerably more expensive. The stack has begun to fossilise, just at the time when businesses are hungrier for growth, when the need to deliver customer-facing, often customer-touching, applications is an imperative.

Which makes me wonder. What Tainter wrote about societies, and what Shirky wrote about companies: are we about to witness something analogous in the systems world? A collapse of a monolith, consumed by its own growth and complexity? As against the simpler, fractal approach of ecosystems?

Just wond’ring. I will probably start taking a deeper look at this; if any of you know of references worth looking into, please let me know.

29 thoughts on “Wond’ring Aloud”

  1. Interesting post JP. Jim Shepherd of Gartner (ex AMR) recently produced a brilliant piece of research called the pace-layered model. Somewhat on similar lines to what you are saying, having layers which react to various paces of change. In his model, he has 3 layers – systems of record (the old ossified monoliths), systems of differentiation and systems of innovation. He borrowed the crux from a similar model used by building architects.

    I came across this research a few weeks back. Interestingly, we have rolled out a 2-layer model – a Systems of Engagement (SoE) layer on top of the SoR layer. This has been inspired by our Future of Work strategy developed by Malcolm Frank (CTSH) and Geoff Moore (of Crossing the Chasm fame).

    You may want to look at Jim’s research.

  2. Ned Lilly’s ERP graveyard scorecard illustrates how many fossilized systems are still in maintenance. At the end of a bad quarter vendors don’t need too much customer or sales arm-twisting to sell another ‘sunset’ license, further prolonging old ossified monolithic systems. http://www2.erpgraveyard.com/tombs.html

    Perhaps to answer the collapse question:

    An analogy with Niall Ferguson’s “Will Debt Trigger US Collapse?”: Will Technical Debt Trigger a US Application Vendor Collapse? http://youtu.be/rPBp3e6t7Ik

    Even if I’m wrong…what gets squeezed is new software R&D…
    Jethro Tull, Wond’ring Aloud… apt.

  3. These observations get me rethinking the consequences of Classen’s law, which is about the relationship between complexity and utility, but only for a single dimension of complexity e.g. it doesn’t deal with the complexity of connected systems.

    You might also want to check out what Roger Sessions has been doing in this space.

  4. From what I can see, Shepherd’s pace-layer model is very similar to what I was modelling as Spectrum at DrKW. Didn’t find any papers, just conference blurb. And yes, I am familiar with Geoff Moore’s work re SoE and SoR; well before that was written, we’d been solving it with Chatter and the apps, secure identity and entitlement-based data access across that join. Interested to hear more about what you guys have done in this respect, what can be learnt.

  5. Clive, I like the Niall Ferguson analogy, must put it through its paces. But this is more than just “technical debt” we’re talking about, to get to the stage where the marginal cost of change to a stack exceeds the marginal utility.

  6. Chris, absolutely. Hadn’t come across Classen’s Law in this context. Geoffrey West at Santa Fe spoke of something very similar to Classen’s Law in the context of cities, scaling and sustainability. I had the opportunity to spend time with him at TED, and there is definitely something analogous there.

  7. Yes, your Spectrum model, from the way you described it, seems similar to the pace model. I had a conf call with Jim. He has a few Gartner articles published. You may want to read them.

    Don’t know what’s the best way to share with you what we have done. I’ll be in SFO on 26/27 later this month. If you are going to be there we can meet and talk.

    Yes I’m somewhat familiar with your work on Chatter as a SoE layer.

  8. yogis describe consciousness as pure simplicity … as the human race becomes more conscious, it follows that traits such as the integration of opposites, lessening of information arbitrages and hierarchies will become more obvious.

    that ain’t academic though, so no papers or tenure from it, just the direct experience of pure being. :-)

  9. I think the idea that any layer has a “natural” rate of change is a false assumption. A layer needs to change at the rate needed to accommodate the change occurring around it. It just so happens that – typically – customers change faster than core business models. So, there’s an assumption that the layers coupled to the customers must always change faster than the layers coupled to the back office.

    While this holds true in the steady state, by acting on this you’re falling right into the trap of the Innovator’s Dilemma: when someone comes up with a disruptive business model on the backend, the existing players (who all made the assumption that the backend business process would always change slowly) are left trying to figure out why they can’t compete anymore.

    So, my (somewhat contrarian) assertion would be that you should design for rapid change at every layer (even if you never decide to take advantage of the ability to rapidly change) – it’s your insurance policy for getting left in the dust by a disruptive competitor.

  10. Thank you Walter; Tom and I are on some mail lists together, and I will ping him about what he’d written…. appreciate the heads up

  11. Dan, I like the principle that you should design for rapid change at every layer. But all layers are not created equal from a cost perspective: the licence models, architecture and skillset costs vary, so the cost of change varies between layers. From an investment perspective, people are less likely to carry out major overhauls to their network OS or their accounting or settlement systems weekly or monthly. The premise behind Spectrum is that there are two kinds of changes: changes within a layer and changes between layers. All layers could change at the same rate for “within layer” changes. If changes created dependencies or consequences for other layers, then some vectoring needs to happen, some way of directionally differentiating between cheap change and expensive change, between slow change and fast change. Does that make sense?

  12. Great piece JP, and I think it’s a worthwhile discussion to bring back. We’ve learned so much in the 5 years since that discussion took place. The broad outlines for networked-based businesses that naturally respond to change have emerged. While we’re still in lesson mode, I think you’re on the right track. For my part, I put some notes together in response to this post on what I think we’ve learned about how to create next-generation enterprises.

  13. We tend to see complexity as value-added and we need to see complexity as value-reduced. If there are two ways of doing the same thing, the simplest is almost always better. And in those rare cases when it isn’t better, it is usually because we didn’t understand what we were doing in the first place. I have written a number of white papers on this, my most recent (and shortest!) is Small and Simple; Keys to Reducing IT Risk. It is available at http://www.objectwatch.com/white_papers.htm#risk (no registration required.)

    One problem we have is that people confuse complexity in the problem space (unavoidable and sometimes even desirable) with complexity in the solution space (avoidable and never desirable.)

  14. I very much like both the article, and the observation made by Dan Foody. The terrific book by Stewart Brand (How Buildings Learn) talks about the chasm between architects (proper architects, not people like us!) and interior designers, and how they were often at each other’s throats. Frank Duffy introduced the notion that there really isn’t such a thing as a building. “A properly conceived building is several layers of longevity of built components”. It is the shearing forces across these components that give rise to the tensions/difficulties of change. Duffy uses “Shell”, “Services”, “Scenery”, and “Set” as his layers. And he does make the observation that they have different longevity: Shell ~50 years (UK), Services ~15 years, Scenery ~5 years, Set ~1 year or less.
    The impacts can be very profound – a scenery-level change, in response to a law that states “every employee must have access to a window”, can have quite an impact on the Services and Structure layers underneath.
    So looking at and dealing with the rate of change of layers is, I believe, terribly important. However, when we do have the opportunity to do something about the deepest layers, we absolutely should take into account the best available principles and materials.

  15. Great observations, Roger. I like both those constructs: viewing complexity as value-reducing, particularly when it migrates from the problem space to the solution space. It’s the sort of argument that makes me rail about “region coding” on DVDs and suchlike, where the incumbents chose to migrate historical business processes that were region-constrained into modern delivery processes.

  16. Thanks Chris, I’m a big fan of Stewart Brand, as also Jane Jacobs and Christopher Alexander. Each in turn has influenced my own thinking about systems architecture…..

  17. Thank you Nick. And yes, I’d love to find out more as to how I can participate in what you’re doing in this respect…..although you may change your mind after reading my next post on this….. :-)

  18. Starting with Roger’s interesting observation. “Complexity as value-added?” That seems to point to a suggestion that “complexity” (unnecessary complexity) makes its way into a solution much earlier than I would have expected. We do, of course, have to be diligent that we don’t overcomplicate at all stages, but if “complexity” is perceived somehow as a value, then it will somehow appear in the value statements at the beginning of the “project”. Maybe in a VPEC-T (Nigel Green/Carl Bate) kind of way, there ought to be some discussion/statement of Value and Trust relationships much earlier in the process. Remembering that not all value is monetary, and that not everyone/everything is trustworthy.
    It isn’t just the feature functions and their interrelationships we have to worry about; we have to layer the “-ilities” too. For example, when we build ships, we build bulkheads that can be sealed off, so the ship won’t sink if some part of the hull is ruptured. We design and build to “acceptable risk”. If the things we are building can be life threatening, we take different care than if we are just doing something simple – like making a shopping list.

  19. I remember listening to John Zachman in Sydney, a few years back. He was talking about legacy systems and the laws of thermodynamics: as more and more development (energy) gets invested in a system over time, less and less business value is emitted, and eventually the system implodes through entropy. There is also the ‘House of Windsor’ metaphor as used by Stephen Spewak in his classic work ‘Enterprise Architecture Planning’.
