Failing at the edges of the network

David Smith, one of the first people to comment on my blog, remains on my everyday read list. Recently I noticed he had linked to something I’d written on risk management, and via his blog I found my way to Bruno Giussani’s commentary on Aula2006, including his coverage of Clay Shirky’s session.

There are some real gems in the two posts I’ve referenced above; I urge you to read them. One that struck a particular chord with me was the following, from Bruno’s piece:

Shirky’s argument goes like this: when you explore really new ideas, it’s pretty much impossible to tell in advance the successes from the failures. The business world today is geared towards “optimizing” the innovation processes in order to reduce the likelihood of failure. That’s a significant disadvantage when compared with the open-source ecosystem, which “doesn’t have to care” and “can try out everything” because “the cost of failure is carried by the individuals at the edges of the network, while the value of the successes magnifies and adds value to the whole network”. “Ecosystems such as open-source get failure for free, and that produces some inevitable unexpected big successes – the Linux operating system – that nobody could have predicted but end up changing the world.”

The cost of failure is carried by the individuals at the edges of the network, while the value of the successes magnifies and adds value to the whole network. Some time ago I commented that open-source people tend to solve problems first and foremost rather than develop complex business models. I think Clay puts the same point far more articulately and elegantly.

If you disaggregate the cost of failure, it will drop. If you reduce the cost of failure, you increase the capacity to innovate. If the innovation is carried out by individuals at the edge, those costs drop as well. As all these costs drop there is a natural speeding up: a lovely virtuous circle with the right feedback loops.
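To make that chain of reasoning concrete, here is a back-of-the-envelope sketch. The model, the numbers and the function name network_value are all mine and purely illustrative (not anything Clay or Bruno proposed); it simply shows how, when each failure is cheap and carried locally, an ecosystem can afford far more attempts and still add more value to the whole network than a tightly filtered portfolio, even with a much lower hit rate per attempt.

```python
import random

random.seed(1)

def network_value(attempts, cost_per_failure, p_success, success_value):
    """Toy model of the argument above: each failed attempt costs only its
    local cost_per_failure, while every success adds success_value to the
    whole network. Returns (value added to the network, total failure cost)."""
    successes = sum(random.random() < p_success for _ in range(attempts))
    failures = attempts - successes
    return successes * success_value, failures * cost_per_failure

# An "optimised" firm: filters hard, so it makes few attempts,
# and each failure is expensive because it is carried centrally.
firm_value, firm_cost = network_value(
    attempts=20, cost_per_failure=50.0, p_success=0.10, success_value=100.0)

# An edge ecosystem: tries everything; each failure is cheap and borne locally.
edge_value, edge_cost = network_value(
    attempts=2000, cost_per_failure=0.5, p_success=0.01, success_value=100.0)

print(f"firm:  value added {firm_value:7.0f}, total failure cost {firm_cost:7.0f}")
print(f"edges: value added {edge_value:7.0f}, total failure cost {edge_cost:7.0f}")
```

With these made-up numbers the ecosystem adds roughly ten times the value of the filtered portfolio, its aggregate failure cost is about the same, and, crucially, that cost never lands on any single participant. Aggregate the cost of failure centrally and you are forced to filter; disaggregate it and you can simply try everything.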

My thanks to David and Bruno and Clay; they’ve given me a lot of good food for thought.

13 thoughts on “Failing at the edges of the network”

  1. In response to your recent theme of the dangers of risk aversion and failure avoidance, I can’t resist leaving the comment I made to my son after he experienced a disheartening setback:

    No one ever learns from success, only from failure.

  2. Great food for thought, as you say, JP, and I inherently agree.

    I have, however, been wondering lately if there is any downside to this. If we completely minimise the cost of failure, does this lead to over-innovation (ridiculous phrase, I know) which clutters the arena and increases ennui and adoption resistance?

    I know I’d err on the side of maximising innovation but I’ve not seen this potential stumbling block discussed anywhere (except amongst techno-luddites). Anyone have any thoughts?

  3. John, if you subscribe to the principle of punctuated equilibrium, then it follows as a corollary that what you call over-innovation is fundamental to the population dynamics of natural selection. (This was demonstrated most vividly by the fossil forms in the Burgess Shale, the primary topic of Stephen Jay Gould’s WONDERFUL LIFE.) On the other hand it is unclear that you want to invoke the population dynamics of natural selection when you are dealing with the magnitude and temporal scales of an enterprise; and that caveat makes me cautious about ANY form of optimization-based thinking in the management of any organization, large or small. We resort to these paradigms, because they are what we know how to do; but that just puts us in the same camp as the drunk looking for his keys under the lamppost because that is where the light is good. Indeed, even trying to reduce failure to its cost may be dangerous, because, like it or not, cost is not always the best abstraction of consequences, particularly when the dynamics of your system are non-linear and the consequences of failing at the edges may be felt at the center (the dark side of the “butterfly effect”).

    I just got done reading the Freeman Dyson piece in the latest NEW YORK REVIEW, based on lectures he gave in 2004. The opening sentence is: “It has become part of the accepted wisdom to say that the twentieth century was the century of physics and the twenty-first century will be the century of biology.” The rest of the article explores scenarios that would support this proposition. However, while most of Dyson’s scenarios involve eventual roles for biotechnology as the new paradigm for solving the world’s problems (poverty being highest on his list), I think he missed out on the real paradigm shift that is at stake.

    The key to that paradigm shift lies in the work of Carl Woese, whom he cites at the beginning of his article. The most important take-away from Woese’s work is that the shift away from physics entails a shift away from the reductionist thinking of physics (the source of the sort of thinking that tries to abstract consequences into costs). When Woese speculates on a “new biology,” he is thinking about (in Dyson’s words) “the obsolescence of reductionist biology as it has been practiced for the last hundred years” (which is to say under the paradigm of physics as “normal science”). It remains to be seen just how the paradigm can (or will) shift; but, when we view agricultural practices through the lenses that Dyson offers, the shift may be in the direction of the sort of “living systems” theory that resulted in a 1000-page book by James Grier Miller back in 1978. This is a paradigm for dealing with an entire ecology with lots of tight couplings (and lots of loose ones, too). Miller (not cited by Dyson) tries to develop a paradigm that can scale from the level of the cell to the level of what he called “the supranational system” (thus anticipating globalization); and two levels below that top level is where he situates the “organization.”

    To try to invoke the complexity of living systems to get away from the positivist paradigm of physics-based reductionism may be little more than invoking a new wave of romanticism to counter Enlightenment thinking; but this has always struck me as a blog that tries to stay one step ahead of the next paradigm shift. Miller, Woese, and Dyson all seem to be thinking about the world at large, rather than the world of the laboratory bench; and it is comforting to see Dyson give so much attention to the problem of poverty, even if I do not agree with everything he says. Those of us in the more mundane world of enterprise information systems might do well to take a cue or two from his current thinking!

  4. Stephen – wow, thanks for the lengthy response – much food for thought there too, and much that is frankly out of my league. My response, I fear, will be far less erudite.

    I’m all for market forces/natural selection – as I said, I’d always err on the side of maximising innovation – but I think my “concern” derives from the increased rapidity of the evolution of new “species/innovations.” The market will eventually clear, of course, but my nagging doubt (no more than that) is whether the time-lags will collectively become so large that efficiency is threatened. A contemporary, albeit trivial, example was a piece in the Times this week decrying the “benefits” of choice and highlighting the fact that Tesco’s (the big UK supermarket) stocks 26 varieties of milk.

    Living in beta is great, but I already see the sort of adoption resistance to which I referred developing in the social networking area, where new products are launched seemingly daily. They may be better, but people have become overwhelmed and won’t consider them. The message, then, is to be quick with your innovation rather than slightly slower but better.

  5. John, you may find The Paradox Of Choice by Barry Schwartz of interest in this context. It’s a bit pop, but a reasonable intro to the tyranny of choice, to anchors and frames, to loss aversion and related subjects.

  6. JP, thanks – I read it long ago and am minded to go back to it. And for the sake of correctness – I was wrong – it’s 38 varieties of milk. That’s varieties, not sizes.

  7. John, I think it is important to recognize that efficiency is not a “design priority” in those “living systems” that Miller examined. Robustness is more important, and this is usually achieved through such inefficiencies as redundancy. In the context of enterprise software, this reflects back on the pioneering text in decision support systems by Keen and Scott Morton, who emphasize as early as Chapter 1 that “effectiveness” is more important than efficiency when enterprise operations are at stake. However, as you read their case studies, you realize that it can be very difficult to pin down criteria for effectiveness. This is why I view going for efficiency as a variation on the drunk who looks for his keys under the lamppost where the light is better!

    Nevertheless, we cannot dismiss the time-lag problem. There are any number of clever observations about ignoring how much “real” time is required to arrive at results. My favorite is still from John Maynard Keynes: “In the long run, we’re all dead.”

  8. Stephen

    Good point – I must remember to be exact with my words on this of all blogs. Effectiveness is undoubtedly what I had in mind.

  9. I agree with the assertion intuitively, and the Open Source movement is a worthy exemplar. I wonder if the empirical data on venture capital investments during the dotcom boom might support this assertion. There is a general feeling that the VCs know when to pull the plug on a project (minimizing the cost of failure), while companies continue to invest, hoping to salvage innovations, even when the early signs of failure are blatantly obvious. I will check around to see if there are broad-based empirical studies in this area.

