More musings about the opensourcing of process

It’s been a long time since I started out on the Four Pillars journey, sometime in the middle of 2003 I guess. At the time, I suggested that enterprise applications would converge to become one of four “pillars”: Publishing, Search, Fulfilment and Conversation.

Most people got the Search pillar, where all I was stressing was that we would move from deterministic models of looking at information through to probabilistic ones, particularly as we realised the lack of accuracy of the information we were getting via the traditional forms-based or tree-structure approaches. I guess most people “got it” because search had been around for a long time, and everyone was familiar with the concepts.

While this was not the case in the middle of 2003, most people have now “got” two of the other pillars, Publishing and Conversation. RSS readers are common now, and people understand the syndication of content; blogs, wikis and IM, the constituents of conversation applications (beyond traditional e-mail), are also common, even if usage patterns show an inverse relationship with age.

What many people didn’t get was the pillar I tried to describe as Fulfilment. The process by which a customer gets what he wants. Buying a book, booking a train ticket, selling the contents of your garage, eating out, watching a film, whatever.

Fulfilment. A state of feeling that you got what you wanted, where you wanted it, when you wanted it and how you wanted it. A state of satisfaction.

I’ve written a number of posts on why customer experience is the only remaining differentiator in a commoditising world, particularly when guest blogging for Shane Richmond at the Telegraph last November: on the economics of the customer, on customers and differentiation, on customers and predictability.

One of the reasons I chose to work where I work is that everyone here is focused on this issue, on improving the customer experience. And with all this as backdrop, you can understand why I want to figure out ways of improving processes that serve the customer. Giving the customer what he wants, precisely what he wants, as quickly as he wants it.

This is not going to happen unless we get our processes defined right, refined right. So we spend a lot of time trying to work out what that means, making sure we are documenting the processes correctly, that they really do reflect the customer experience. Making sure that we understand where we’re losing time or accuracy. Making sure we understand what we’re going to do to gain back the time and the accuracy.

Which is where the opensourcing of processes comes in. Why Michael Hugos’ post on visibility resonated with me so much, and why I wrote my earlier post on the subject.

Many many years ago, probably in the mid-1980s, I remember reading an article called Don’t Allocate, Isolate. Probably in the Harvard Business Review or the Sloan Management Review. While looking at aspects of cost analysis, the authors were suggesting that enterprises spent too long arguing about who paid and how much, rather than really understanding what was spent and on what.

I feel similarly about processes. We should document rather than analyse or critique them. Just document the reality. There’s nothing magical or differentiating about processes per se, it’s how we execute them that matters. Which means that standardisation of processes should not be that difficult. Unless we want to make it difficult.

Maybe we do the same thing with the time and accuracy aspects of processes. Maybe we should avoid getting into the angels-dancing-on-pinheads syndrome. No finger-pointing, no blame cultures while we do something very simple. Measure how long it takes for the customer to get what he wants. Measure how often we get his want right.

These things need to be incontrovertible truths, not for debate. Truths that cannot be varied by the diktats of the Word-Excel-PowerPoint troika.

Expose the process, as if it were code. Expose the completion times and error rates, as if they were code. Let everyone see them. Even the customer. Particularly the customer. Why don’t we learn from the FedEx model?
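
To make those two truths concrete, here’s a minimal sketch, in Python and purely as an illustration, of what it could look like to compute them straight from a raw event log and publish the numbers as-is. The event records, field names (order_id, status, timestamp) and status values are all invented for this example; the point is only that completion time and accuracy can be derived mechanically, leaving nothing for the Word-Excel-PowerPoint troika to massage.

```python
from datetime import datetime

# Hypothetical event log for a customer-facing process.
# The orders, statuses and field names are illustrative only.
events = [
    {"order_id": "A100", "status": "requested", "timestamp": "2007-03-01T09:00:00"},
    {"order_id": "A100", "status": "fulfilled", "timestamp": "2007-03-03T17:30:00"},
    {"order_id": "A101", "status": "requested", "timestamp": "2007-03-02T10:15:00"},
    {"order_id": "A101", "status": "error",     "timestamp": "2007-03-02T12:00:00"},
    {"order_id": "A101", "status": "fulfilled", "timestamp": "2007-03-02T16:45:00"},
]

def cycle_times(events):
    """How long did each customer wait, from request to fulfilment?"""
    starts, ends = {}, {}
    for e in events:
        ts = datetime.fromisoformat(e["timestamp"])
        if e["status"] == "requested":
            starts[e["order_id"]] = ts
        elif e["status"] == "fulfilled":
            ends[e["order_id"]] = ts
    return {oid: ends[oid] - starts[oid] for oid in starts if oid in ends}

def error_rate(events):
    """How often did we get the customer's want wrong?"""
    orders = {e["order_id"] for e in events}
    errored = {e["order_id"] for e in events if e["status"] == "error"}
    return len(errored) / len(orders) if orders else 0.0

# Publish the numbers as they are; no finger-pointing, no massaging.
for order_id, waited in cycle_times(events).items():
    print(order_id, "waited", waited)
print("error rate:", error_rate(events))
```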

Michael Hugos’ Universal Product Codes are elements that can be expressed adequately in the tag-meets-microformat spaces; when we do this, we have the DNA we need to track processes across system silos as well as departmental silos.
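
As a companion sketch, again an assumption-laden illustration rather than anything Hugos actually specifies, here is how little machinery that tracking DNA needs: one shared, microformat-style tag carried on events emitted by three hypothetical systems (a CRM, a warehouse system and a billing system, all invented for this example), stitched into a single end-to-end trace. The shared identifier does the work, not any particular toolset.

```python
# One identifier, carried everywhere the process instance goes.
# The tag format and the event fields below are illustrative assumptions.
order_tag = "process:fulfilment/order/A100"

crm_events = [
    {"tag": order_tag, "system": "crm", "step": "order captured", "ts": "2007-03-01T09:00"},
]
warehouse_events = [
    {"tag": order_tag, "system": "warehouse", "step": "picked and packed", "ts": "2007-03-02T14:20"},
]
billing_events = [
    {"tag": order_tag, "system": "billing", "step": "invoiced", "ts": "2007-03-03T08:05"},
]

def trace(tag, *event_streams):
    """Stitch one end-to-end view of a process instance across system silos."""
    merged = [e for stream in event_streams for e in stream if e["tag"] == tag]
    return sorted(merged, key=lambda e: e["ts"])  # ISO timestamps sort chronologically

for event in trace(order_tag, crm_events, warehouse_events, billing_events):
    print(event["ts"], event["system"], event["step"])
```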

When we expose the simple things, we may start seeing some of the magic of democratised innovation:

- the Linus’s Law effect, as transparency attracts eyeballs;
- the peer ratings effect, as people compete for peer recognition by improving things faster and better;
- the collaborative filtering effect, as people learn from a people-who-did-this-also-did approach;
- the Because Effect, as customers flock and stay because of the experience and not with shackles and restraints.

There is so much we can discover as we do this. I’d like to believe that one man’s department is another man’s firm, that processes by themselves aren’t different in different firms, that we just like believing they are.

An aside. First we believe that our processes are different. Then we believe that our processes are differentiating (in order to keep the Emperor Clothed). And we keep on believing it, even though we also believe in open architectures and platform independence and the power of standardisation and Moore’s Law and Metcalfe’s Law and Gilder’s Lemma and the War for Talent and and and.

You know what? Sometimes I think we make enterprise processes behave like vendor-lock bloatware. We work to ensure that any and all efficiency gains are absorbed by Topsy-related process growth.

We must find a way to apply opensource principles to core processes.

11 thoughts on “More musings about the opensourcing of process”

  1. Hi JP,

    I’ve been thinking about open-sourcing of process maps – the high-level processes that make up a company or industry and their related KPIs – for a while. One of the things that’s been missing is a forum. The technology open-source community has a set of tooling around which the artifacts of the development process can be sent back and forth, and housed.

    In order to facilitate the open-sourcing of process, then, we need to have a centralized way to store the artifacts, and the artifacts themselves need to be structured. Again using the analogy of the technology community, they have standard languages that can be used, and they have standardized the means by which they do version control, etc.

    Lombardi recently released a new SaaS-based process mapping and strategy product, and we think it could form the basis for hosting the very open-sourcing you seem to be implying. This isn’t meant to be product spam on your blog but, rather, to let you know that there is some thinking on what these higher-level process maps need to include (it’s not BPMN!), how you tie processes at a managerial level to goals, KPIs and error states, and how you make it available to a broad community.

    For now, Blueprint (the product name) is for specific accounts (companies) but we’ve thought about setting up open-source forums for specific industries where people across the industry could develop, in true open-source fashion, their core processes and KPIs. This isn’t about “sharing drawings”… it’s about developing definitive, open-source processes at the managerial level.

    Any thoughts?

    Cheers,
    Phil

    ps – I also blogged about these two posts at http://blog.lombardicto.com/2007/03/transparency_an.html

  2. ps – in re your comments about the “customer process” above… if you haven’t, you should check out Patricia Seybold’s latest book, Outside Innovation. She does a good job of bringing the Customers.com concepts into more of a process focus…

  3. I have been lurking quietly during this discussion, but Phil Gilbert has now awakened the kraken in me! As is often the case, this is achieved through a single trigger-word; and the one Phil used was “artifacts.” This was enough to convince me that, however simple we may want this to be, we are up against ontological and epistemological problems that cannot be ignored.

    The best way to approach the ontological issue is through Plato’s “Theaetetus,” that now-notorious dialogue misread by Nonaka as a source for a definition of knowledge. If one reads Plato instead of Nonaka, one discovers that Socrates not only defeats the four definitions of knowledge proposed to him by Theaetetus but also demonstrates that the very concept of knowledge is extremely tightly coupled to three other concepts: memory, description, and (the real kicker) BEING. Both JP (“Just document the reality”) and Phil (with his artifact management) seem to be saying that all we need to do is get the description right. However, because of that tight coupling, getting it right about description is no easier than getting it right about knowledge; and, while none of us want to outsource the problem to philosophers, the dead moose on the table that we cannot ignore is that concept of being. Much of what Socrates has to say in “Theaetetus” has to do with the fact that the “being of process” is a decidedly different ontological beast than the “being of artifact.” This gives Theaetetus a very rough time, and it should give us one too!

    Unfortunately, the literature does not help us very much, whether it is from the distant past or hot off the presses. However, if we choose to focus on information systems, a CACM paper by Hirschheim and Klein (“Four Paradigms of Information Systems Development”), along with the inspiration it draws from the Burrell-Morgan book about sociological paradigms for organizational analysis, offers some guidance. My own approach has been to try to examine the nature of customer experience within a broader framework of MOTIVATED ACTION, since this concept embraces both the needs of the customer and how those needs are satisfied (both of which determine the experience itself). That approach led me to view the Hirschheim-Klein quad-chart with a slightly different set of axes. One axis is concerned with whether or not the motivation is STATE-BASED or TRANSITION-BASED. The other axis addresses whether the action is TASK-BASED or WORKER-BASED. I originally developed this chart in trying to address epistemological questions concerned with the role of information systems in the service economy, but I think it can also be applied to the relationship between such systems and business processes.

    There is a lot more that I can say about this framework. My personal interests have tended towards questions of how both internal and external communication can be made more effective (and the role of technology in achieving that goal); but that exploration does not belong in this particular discussion. So I shall just close by saying that all feedback will be most welcome!

  4. Stephen,

    Excellent comments and much better said than my own. I don’t have time to give these the time they deserve (and I love your 2×2… I need to study on that for a while). I will say that I deliberately used the term “artifacts” so that I wouldn’t have to be tied down to what the description was… I agree with you that whatever specific description (or you could say, specific artifact) you use, it will be only an approximation of reality. So, because these are social systems, there will be many artifacts that depict “the process.”

    Having said that, the thing I agree most with about JP’s post is that the issue here is “transparency,” not the specific process at any point in time. The manifestation of the process, the snapshot, if you will, is simply a useful model so that we can at some level of granularity begin to understand the linkages between actions and results, so that we can better mold future actions to achieve desired results.

    So just documenting the process as it is, and thereby giving visibility to the process, is more important than getting that model “perfect” in either an “as-is” or a “to-be” state!

    My company is a “business process management vendor.” We compete every day with companies who talk about BPEL and XPDL and workflow and this and that execution-oriented stuff, when the real issue is getting visibility into the processes, into the “hidden factories” where work gets done but not very efficiently. We think process excellence isn’t about workflow; we think it’s about visibility. From visibility does come knowledge, as imperfect as it may be using any specific artifact at any specific point in time…

    I hope this makes some sense… and, again, excellent comment!

    Phil

  5. Phil (and JP), I certainly did not intend to OPPOSE the need for description, just to point out that it is harder than any of us would like it to be. What I DO oppose is the proposition that (in the language of my 2×2) one can describe a transition simply by describing its before and after states (although, in fairness to the “wet brain” crowd, I believe Oliver Sacks once wrote a piece about the movie-camera-in-the-brain, i.e., motion sensed as a sequence of “frames”). Another way of putting this is that, while we are really good at grounding descriptions in nouns and noun phrases (which are basically the building blocks of any database schema), we (or at least information systems) are sadly deficient when we have to account for verbs (particularly with their subtleties of tense, voice, mood, and what not) and adverbs (don’t get me started)! As a curious coincidence, I happened to be writing in my own blog today about writing about processes, suggesting that the best examples could be found in writing about the dance (check out the Arts pages of the TELEGRAPH), sports (in the United States the earlier generations did a much better job, particularly before television intruded), and, for those who want a book-length example and are not offended by bullfighting, DEATH IN THE AFTERNOON. From this we may have to conclude that “documenting the process” effectively requires a LITERARY skill that we have thus far overlooked.

